
Forum Discussion

cjunior_138458
Jun 06, 2014

LTM using own VIP as a pool member

Hi, my customer has an environment that needs to work with the flow below:

LTM
VIP1: 177.x.x.x:443
  Pool: ltm_frontend_pool > Priority Group Activation: Less than 1
  members:
    10.40.1.1:80 priority 10
    10.50.1.1:8443 priority 0

VIP2: 10.50.1.1:8443
   Pool: ltm_backend_pool
   members:
     10.x.x.x:443 backend servers

ASM VIP
VIP1: 10.40.1.1:80
  Pool: asm_frontend_pool
  members:
      10.50.1.1:8443

VIP1 LTM (client ssl) >> SNAT >> VIP1 ASM >> SNAT >>  VIP2 LTM >> (server ssl) >> servers

(A little bit confused, sorry!)

When ASM is up, everything is OK. But when the priority group bypasses ASM, the problem occurs: VIP2 inside ltm_frontend_pool does not respond. In an F5 solution article I read that a VIP on the same device does not respond to ARP, so we need to use the "virtual" command in an iRule.

I wrote the iRule below and it works fine.

Finally, my question: in the scenario above, is there another, perhaps simpler, way to solve this case?

iRule:

dg_ltm_forced_vips => maps an LTM VIP address:port to the virtual server name that traffic must be forced into on the LTM backend.
dg_ltm_forced_failed_vips => maps an ASM VIP address:port to the LTM backend virtual server to reselect when the member fails.
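For reference, internal string data groups like these can be created and maintained from tmsh roughly as follows. This is a hedged sketch: the keys come from the scenario above, but the record values (virtual server names) are assumptions; substitute your own virtual server names.

```
# Hypothetical sketch -- key = member address:port, value = virtual server name.
# "/Common/vip2_ltm_backend" is an assumed name for VIP2 (10.50.1.1:8443).
create ltm data-group internal dg_ltm_forced_vips type string records add { "10.50.1.1:8443" { data "/Common/vip2_ltm_backend" } }
create ltm data-group internal dg_ltm_forced_failed_vips type string records add { "10.40.1.1:80" { data "/Common/vip2_ltm_backend" } }

# Later maintenance, as mentioned in the replies, without touching the iRule:
modify ltm data-group internal dg_ltm_forced_vips records add { "10.60.1.1:8443" { data "/Common/another_vip" } }
```

Keeping the mappings in data groups means new VIPs can be added with a one-line tmsh change rather than an iRule edit.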

when HTTP_REQUEST {

    # Check the first active member of the default pool.
    # If that member is actually a virtual server address on this same
    # LTM, traffic must be forced into that second virtual server with
    # the "virtual" command (a VIP on the same device does not answer ARP).

    set memberList [active_members -list [LB::server pool]]
    log local0. "Active members: $memberList"

    set vip "[lindex [lindex $memberList 0] 0]:[lindex [lindex $memberList 0] 1]"
    log local0. "===========> First VIP: $vip"

    # Look the address:port up in the data group
    set virtual_name [class match -value $vip equals dg_ltm_forced_vips]

    if { $virtual_name ne "" } {
        log local0. "=========== set virtual ltm: $virtual_name"
        virtual $virtual_name
    }

    unset vip
    unset virtual_name
}

when LB_SELECTED {
    log local0. "===========> Selected server: [LB::server addr]:[LB::server port]"
}

when LB_FAILED {
    log local0. "=========== Failed server: [LB::server addr]:[LB::server port]"

    # If the selected server failed, reselect according to the data group
    set virtual_name [class match -value "[LB::server addr]:[LB::server port]" equals dg_ltm_forced_failed_vips]
    if { $virtual_name ne "" } {
        log local0. "=========== reselect virtual ltm: $virtual_name"
        LB::reselect virtual $virtual_name
    }
}

2 Replies

  • In the scenario above, is there another, perhaps simpler, way to solve this case?

     

    What about having only 10.40.1.1:80 in ltm_frontend_pool and sending the request to 10.50.1.1:8443 (or to 10.x.x.x:443 directly) when ltm_frontend_pool is down or in LB_FAILED?
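    A minimal sketch of that idea, assuming ltm_frontend_pool keeps only the ASM member and "/Common/vip2_ltm_backend" is a hypothetical name for VIP2 (10.50.1.1:8443):

    ```
    # Sketch, not a tested config: keep only 10.40.1.1:80 (ASM) in the pool
    # and fall back to the second LTM virtual only when ASM is unavailable.
    # "/Common/vip2_ltm_backend" is an assumed name for VIP2.

    when CLIENT_ACCEPTED {
        if { [active_members ltm_frontend_pool] < 1 } {
            # ASM member is down: skip it and go straight to the backend virtual
            virtual /Common/vip2_ltm_backend
        }
    }

    when LB_FAILED {
        # ASM member was selected but the connection failed: retry via VIP2
        LB::reselect virtual /Common/vip2_ltm_backend
    }
    ```

    This avoids the priority group and the data-group lookups entirely, at the cost of hardcoding the fallback virtual in the iRule.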

     

  • First, thank you nitass. I chose the data-group approach because there are many VIPs and the LTM owners don't want hardcoded addresses; they will maintain the data themselves with the tmsh command "modify ltm data-group.......". Your suggestion about catching the load-balancing failure event can be useful. I only learned about the "VIP to VIP" problem when the eighteen VIPs were ready to go live but did not work, so I kept the original priority-group configuration to be less labor intensive, since it only required adding an iRule. I don't know the processing cost of my code, which is the cause of my discomfort, and I needed to keep directing traffic to the second VIP because I need to re-encrypt on the server side. I guessed that someone had gone through a similar situation. So, thanks again.