Forum Discussion
VIP monitoring multiple Pools
Our application has a specific requirement where, based on the URI, we route requests between multiple pools using an iRule.
The problem we are having is with health checks: currently the VIP monitors the health of the default pool only.
In this scenario, even when all members of a non-default pool are down, the VIP still directs traffic to that pool.
We are trying to see if the VIP health status can be determined from the health status of multiple pools. Is there any way we can achieve that?
- Dario_Garrido
Noctilucent
Hello Manikanta.
Check this video; you will find the answer at the end ->
https://youtu.be/4uRZDAZNPRI
KR,
Dario.
- JG
Cumulonimbus
A very basic example:
when HTTP_REQUEST {
    if { [active_members pool_1] < 1 or [active_members pool_2] < 1 or [active_members pool_3] < 1 } {
        HTTP::respond 503 content {
            <html>
            <head>
            <title>Service Error</title>
            </head>
            <body>
            <font color="red">We are sorry, but the site you are trying to access is currently unavailable.<p>
            </body>
            </html>
        } "Content-Type" "text/html"
    }
}
- Manikanta
Nimbostratus
Thanks Dario and JG.
In my use case, I don't want to respond to the client; we have a backup. I just want to mark the VIP down if any of the pools is down, so that our BIG-IP DNS will not provide that IP to the client.
Basically, our setup looks like this:
VIP
Default pool: A
Other pools: B, C, D
Traffic between pools is handled by an iRule with URI mapping.
We would like to see if there is any way that, when active_members of pool A, B, C, or D is < 1, the VIP is marked down/red, or BIG-IP DNS is told that it is down.
Can we do that?
- Dario_Garrido
Noctilucent
Are you running an old release?
This is not the normal behavior now.
My config...
ltm virtual VS-TEST_2000 {
    destination 10.130.40.150:sieve-filter
    ip-protocol tcp
    mask 255.255.255.255
    profiles {
        http { }
        tcp { }
    }
    rules {
        RULE_MarkDown
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
    vs-index 18
}
I'm forcing the pool state down using a UDP monitor.
ltm pool P-ABC_80 {
    members {
        N-WEB1_10.1.1.1:http {
            address 10.1.1.1
            session monitor-enabled
            state down
        }
    }
    monitor udp
}
ltm pool P-DEF_80 {
    members {
        N-WEB2_10.1.1.2:http {
            address 10.1.1.2
            session monitor-enabled
            state down
        }
    }
    monitor udp
}
iRule
when HTTP_REQUEST {
    set uri [HTTP::uri]
    if { $uri starts_with "bla" } {
        pool /Common/P-ABC_80
    } elseif { $uri starts_with "ble" } {
        pool /Common/P-DEF_80
    } else {
        drop
    }
}
KR,
Dario.
- Manikanta
Nimbostratus
Hi Dario,
Which pool did you assign to the VIP: P-ABC_80 or P-DEF_80? If either of the pools (P-ABC_80 or P-DEF_80) is down, will the entire VIP be marked down?
How are you doing this? I was thinking the VIP health status is based on the default pool's health status.
By the way, we are running 12.1.3 code.
- Dario_Garrido
Noctilucent
No, the status of the VS depends on all the pools assigned to it.
In the last example I'm not using a default pool, but it's the same with one.
ltm virtual VS-TEST_2000 {
    destination 10.130.40.150:sieve-filter
    ip-protocol tcp
    mask 255.255.255.255
    pool P-GHI_80
    profiles {
        http { }
        tcp { }
    }
    rules {
        RULE_MarkDown
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
    vs-index 18
}
In 12.1.3 the behavior is the same...
Please share your config (VS, pool, iRule, ...).
KR,
Dario.
- Manikanta
Nimbostratus
Hi,
I don't see an option to add multiple pools to one VIP. How are you adding them?
- Dario_Garrido
Noctilucent
Pool P-GHI_80 was added as the default pool; the rest of them (P-ABC_80 and P-DEF_80) were added only in the iRule.
As I said, share your config (VS, Pool, iRule).
- JG
Cumulonimbus
You have not specified how your DNS server monitors the virtual server status. An iRule is used to respond to an end-user request only.
- Manikanta
Nimbostratus
JG,
BIG-IP DNS monitors the health status of virtual servers located at different data centers. Based on their availability, it provides an IP for the request using the specified load-balancing method.
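Roughly, on the BIG-IP DNS side it looks something like this (the server, pool, wide IP, and virtual server names below are placeholders, not our actual objects, and the load-balancing method may differ):
gtm pool a app_gtm_pool {
    load-balancing-mode global-availability
    members {
        dc1-bigip:/Common/app_443_vs {
            member-order 0
        }
        dc2-bigip:/Common/app_443_vs {
            member-order 1
        }
    }
}
gtm wideip a app.example.com {
    pools {
        app_gtm_pool {
            order 0
        }
    }
}
With global-availability, BIG-IP DNS only answers with the second data center's virtual server when the first one is unavailable, which is why I need the LTM virtual server status to reflect all the pools.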
- Manikanta
Nimbostratus
Dario,
Here is the config,
VS:
ltm virtual test_443_VIP {
destination 10.xx.xx.xx:443
ip-protocol tcp
mask 255.255.255.255
pool abc_POOL
profiles {
rdmh_2018 {
context clientside
}
http { }
tcp { }
}
rules {
uri_forward
}
source 0.0.0.0/0
source-address-translation {
pool UP_snatpool
type snat
}
translate-address enabled
translate-port enabled
vs-index 19
}
Pools:
ltm pool ghi_POOL {
members {
x1:8082 {
address 10.xx.xx.xx
session monitor-enabled
state down
}
x2:8082 {
address 10.xx.xx.xx
session monitor-enabled
state down
}
}
monitor http_keepalive_html
}
ltm pool jkl_POOL {
members {
x3:4082 {
address 10.xx.xx.xx
session monitor-enabled
state down
}
x4:4082 {
address 10.xx.xx.xx
session monitor-enabled
state down
}
}
monitor http_keepalive_html
}
ltm pool abc_POOL {
members {
a1:8081 {
address 10.xx.xx.xx
session monitor-enabled
state up
}
a2:8081 {
address 10.xx.xx.xx
session monitor-enabled
state up
}
}
monitor http_keepalive_html
}
ltm pool def_POOL {
members {
b1:8084 {
address 10.xx.xx.xx
session monitor-enabled
state up
}
b2:8084 {
address 10.xx.xx.xx
session monitor-enabled
state up
}
}
monitor http_keepalive_html
}
iRule:
when HTTP_REQUEST {
switch -glob [string tolower [HTTP::uri]] {
"/abc*" {
pool abc_POOL
}
"/def*" {
pool def_POOL
}
"/ghi*" {
pool ghi_POOL
}
"/jkl*" {
pool jkl_POOL
}
default {
reject
}
}
}
Here in my config the default pool is abc_POOL; the other pools were added in the iRule only. Even though ghi_POOL and jkl_POOL are down, the VS still shows UP.
I am looking for a way to mark the virtual server down if any of the pools (abc_POOL, def_POOL, ghi_POOL, jkl_POOL) is down.
- Dario_Garrido
Noctilucent
The normal behavior is not to mark the VS 'down' as long as at least one of the configured pools is 'up'.
I don't know if it's possible, but you can try to create an iRule with 'active_members' to check whether one of the pools is marked 'down'. After that, you can use 'LB::down' to force all pool members down.
https://clouddocs.f5.com/api/irules/active_members.html
https://clouddocs.f5.com/api/irules/LB__down.html
Based on this info, I don't know for sure whether this solution is feasible:
"Note: Calling LB::down in an iRule triggers an immediate monitor probe regardless of the monitor interval settings."
Another thing you can do, and this one is feasible, is to configure an iCall that disables the VS when one of those pools is down.
REF - https://devcentral.f5.com/s/articles/what-is-icall-27404
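As an illustration only, a rough sketch of such an iCall (the script name, handler name, and 30-second interval are placeholders, and the availability check may need adjusting for your version) could look like this:
sys icall script check_pools_vs_state {
    definition {
        # VS to act on, plus the default pool and the pools selected by the iRule
        set vs "test_443_VIP"
        set pools [list abc_POOL def_POOL ghi_POOL jkl_POOL]
        set any_down 0
        foreach p $pools {
            # Anything other than "available" is treated as down here
            set state [tmsh::get_field_value [lindex [tmsh::get_status ltm pool $p] 0] status.availability-state]
            if { $state ne "available" } {
                set any_down 1
            }
        }
        if { $any_down } {
            # Disable the VS so BIG-IP DNS stops handing out its address
            tmsh::modify ltm virtual $vs disabled
        } else {
            tmsh::modify ltm virtual $vs enabled
        }
    }
}
sys icall handler periodic check_pools_vs_handler {
    interval 30
    script check_pools_vs_state
}
Whether a disabled VS is enough for your BIG-IP DNS monitor to take it out of rotation depends on how that monitor checks the VS, so verify that part in your environment.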
KR,
Dario.
- JG
Cumulonimbus
You can set up monitors that poll the URLs that reach each of the Web service pools, and then apply these monitors to your DNS health checking system.
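For example (hypothetical monitor names, host header, and recv strings; adjust them to whatever your application actually returns), path-specific HTTPS monitors might look like this:
gtm monitor https mon_app_ghi {
    defaults-from https
    send "GET /ghi/ HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n"
    recv "200 OK"
}
gtm monitor https mon_app_jkl {
    defaults-from https
    send "GET /jkl/ HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n"
    recv "200 OK"
}
You could then attach these monitors to the virtual server entry under the corresponding gtm server object, so BIG-IP DNS only hands out the address when every path answers.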