
Query on GTM iRule based on Pool Availability

Hello,

 

I am very new to iRule creation. In GTM I tried to create the iRule below, but I am getting an error.

 

Our intention is to reroute the DNS query to a different pool based on the client IP and pool availability.

 

Condition:

If the client IP matches and the pool is available, the query should go to the normal pool.

If the client IP matches and the pool is not available, the query should go to the failback pool.

 

Pool EUR_LDS0_ITHUBPR_POOL with TTL 300

Pool GLOBAL_LDS0_POOL with TTL 300

Pool GLOBAL_FAILBACK_LDS0_POOL with TTL 60

 

when DNS_REQUEST {

 if [{ [IP::addr [IP::client_addr] equals 10.235.24.64/27] and ([active_members EUR_LDS0_ITHUBPR_POOL] > 0) } {

   pool EUR_LDS0_ITHUBPR_POOL

    } else {

   pool GLOBAL_FAILBACK_LDS0_POOL }

}

  else {

   pool GLOBAL_LDS0_POOL

  }

 }

 

Appreciate any help on this.

1 ACCEPTED SOLUTION

crodriguez
F5 Employee

Data groups are available in BIG-IP DNS systems. Per the support article SanjayP identified, in versions prior to 12.1, you can only configure them using TMSH as a workaround. In v12.1 and later, you can also define them using the Configuration utility at DNS > GSLB > Delivery > iRules > Data Group List.

 

In the datagroup, you could define the IP network address ranges as the key and the associated pool name to use as the value. For example:

[Screenshot: special_pool_ips data group configuration] (In case the screen shot is too small...)

(tmos)# list /ltm data-group internal special_pool_ips
ltm data-group internal special_pool_ips {
    records {
        10.221.152.0/24 { data AOA_LDS0_WIPRO_POOL }
        10.222.152.0/24 { data EUR_LDS0_WIPRO_POOL }
        10.223.152.0/24 { data AMS_LDS0_WIPRO_POOL }
        10.234.24.64/27 { data AMS_LDS0_ITHUBPR_POOL }
        10.235.24.64/27 { data EUR_LDS0_ITHUBPR_POOL }
        10.236.24.64/27 { data AOA_LDS0_ITHUBPR_POOL }
    }
    type ip
}
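For reference only, a data group like the one listed above could also be built directly from tmsh in one command. This is a sketch using two of the networks and pools from this thread; verify the exact syntax on your version with tmsh's built-in help for ltm data-group, then add the remaining records the same way:

```
# Sketch only -- create an address-type data group from tmsh;
# record keys are source networks, values are the pool names to use.
(tmos)# create /ltm data-group internal special_pool_ips type ip records add { 10.235.24.64/27 { data EUR_LDS0_ITHUBPR_POOL } 10.222.152.0/24 { data EUR_LDS0_WIPRO_POOL } }
```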

Then your iRule could be reduced to something like below. I included log commands to write to /var/log/gtm when testing. These should be commented out for production. I checked for a match with the data group first, before checking to see if there are active members. That way you don't have to do two comparisons on every DNS query, only on those from the special client IPs:

when DNS_REQUEST {
    # set default pool for load balancing
    pool GLOBAL_LDS0_POOL
    # If the DNS request is from a client with a special IP address,
    # select a different load balancing pool, but only if it has an
    # available pool member. If there are no available pool members,
    # use GLOBAL_FAILBACK_LDS0_POOL.
    if { [class match [IP::client_addr] equals special_pool_ips] } {
        log local2. "Match with special_pool_ips datagroup for [IP::client_addr]"
        if { [active_members [class lookup [IP::client_addr] special_pool_ips]] > 0 } {
            log local2. "Pool [class lookup [IP::client_addr] special_pool_ips] has active members"
            pool [class lookup [IP::client_addr] special_pool_ips]
        } else {
            log local2. "Pool [class lookup [IP::client_addr] special_pool_ips] has no active members; using failback pool"
            pool GLOBAL_FAILBACK_LDS0_POOL
        }
    }
}

 


12 REPLIES

crodriguez
F5 Employee

Can you share the error that you are getting, please? It will help us pinpoint the problem.

Hello Crodriguez,

 

Thanks. If I modify the iRule into two statements, then it is accepted.

 

when DNS_REQUEST {
  if { [IP::addr [IP::client_addr] equals 10.235.24.64/27] and ([active_members EUR_LDS0_ITHUBPR_POOL] > 0) } {
    pool EUR_LDS0_ITHUBPR_POOL
  } elseif { [IP::addr [IP::client_addr] equals 10.235.24.64/27] and ([active_members EUR_LDS0_ITHUBPR_POOL] == 0) } {
    pool GLOBAL_FAILBACK_LDS0_POOL
  } else {
    pool GLOBAL_LDS0_POOL
  }
}

 

But our intention is to have a simplified one.

 

Condition:

1. If the client IP matches:

  • if the pool is available, go to the normal pool (EUR_LDS0_ITHUBPR_POOL)
  • if the pool is not available, go to the failback pool (GLOBAL_FAILBACK_LDS0_POOL)

2. If the client IP doesn't match:

  • go to the default pool (GLOBAL_LDS0_POOL)


Something like below; modify as needed:

when DNS_REQUEST {
  switch [IP::addr [IP::client_addr] mask 255.255.255.224] {
    "10.235.24.64" {
      if { [active_members EUR_LDS0_ITHUBPR_POOL] > 0 } {
        pool EUR_LDS0_ITHUBPR_POOL
      } else {
        pool GLOBAL_FAILBACK_LDS0_POOL
      }
    }
    default {
      pool GLOBAL_LDS0_POOL
    }
  }
}

 

Hello Sanjay, it is working perfectly. But when I tried to add additional conditions (clients in a /24 subnet) as below, I am getting an error:

 

01070151:3: Rule [/Common/FAILBACK] error:

/Common/FAILBACK:24: error: [undefined procedure: default][default {pool GLOBAL_LDS0_POOL}]

/Common/FAILBACK:28: error: [command is not valid in the current scope][}]

 

Appreciate your help on this.

 

when DNS_REQUEST {

 switch [IP::addr [IP::client_addr] mask 255.255.255.224] {

 "10.234.24.64"

   {

   if {[active_members EUR_LDS0_ITHUBPR_POOL] > 0}{ 

  pool AMS_LDS0_ITHUBPR_POOL

} else {

  pool GLOBAL_FAILBACK_LDS0_POOL 

      return

}

}

}

 switch [IP::addr [IP::client_addr] mask 255.255.255.0] {

 "10.222.152.0"

   {

   if {[active_members EUR_LDS0_ITHUBPR_POOL] > 0}{ 

  pool EUR_LDS0_ITHUBPR_POOL

} else {

  pool GLOBAL_FAILBACK_LDS0_POOL 

      return

}

}

}

   default {

    pool GLOBAL_LDS0_POOL

    }

  }

 }

The switch statement in an iRule doesn't support different subnet masks, so the iRule needs to be modified. If the BIG-IP is licensed with only the DNS module, iRule data groups are not present; the following iRule can be used in that scenario. On the WIP, please use "GLOBAL_LDS0_POOL" as the default pool.

 

when DNS_REQUEST {
  if { ([IP::addr [IP::client_addr]/27 equals 10.235.24.64] or [IP::addr [IP::client_addr]/24 equals 10.222.152.0]) and ([active_members EUR_LDS0_ITHUBPR_POOL] > 0) } {
    pool EUR_LDS0_ITHUBPR_POOL
  } elseif { ([IP::addr [IP::client_addr]/27 equals 10.235.24.64] or [IP::addr [IP::client_addr]/24 equals 10.222.152.0]) and ([active_members EUR_LDS0_ITHUBPR_POOL] == 0) } {
    pool GLOBAL_FAILBACK_LDS0_POOL
  } else {
    pool GLOBAL_LDS0_POOL
  }
}

 

 

If your BIG-IP has iRule data groups available, create a data group with the addresses of the DNS servers and use the iRule below.

 

when DNS_REQUEST {
  if { [class match [IP::client_addr] equals src-dns_datagroup] and ([active_members EUR_LDS0_ITHUBPR_POOL] > 0) } {
    pool EUR_LDS0_ITHUBPR_POOL
  } elseif { [class match [IP::client_addr] equals src-dns_datagroup] and ([active_members EUR_LDS0_ITHUBPR_POOL] == 0) } {
    pool GLOBAL_FAILBACK_LDS0_POOL
  } else {
    pool GLOBAL_LDS0_POOL
  }
}

 

Let us know how testing goes.

Hello Sanjay,

 

Thanks. The BIG-IP is licensed for GTM only.

 

So, is the iRule below the only way we can achieve our goal?

 

when DNS_REQUEST {
  if { [IP::addr [IP::client_addr]/27 equals 10.235.24.64] and ([active_members EUR_LDS0_ITHUBPR_POOL] > 0) } {
    pool EUR_LDS0_ITHUBPR_POOL
  } elseif { [IP::addr [IP::client_addr]/27 equals 10.235.24.64] and ([active_members EUR_LDS0_ITHUBPR_POOL] == 0) } {
    pool GLOBAL_FAILBACK_LDS0_POOL
  } elseif { [IP::addr [IP::client_addr]/27 equals 10.236.24.64] and ([active_members AOA_LDS0_ITHUBPR_POOL] > 0) } {
    pool AOA_LDS0_ITHUBPR_POOL
  } elseif { [IP::addr [IP::client_addr]/27 equals 10.236.24.64] and ([active_members AOA_LDS0_ITHUBPR_POOL] == 0) } {
    pool GLOBAL_FAILBACK_LDS0_POOL
  } elseif { [IP::addr [IP::client_addr]/27 equals 10.234.24.64] and ([active_members AMS_LDS0_ITHUBPR_POOL] > 0) } {
    pool AMS_LDS0_ITHUBPR_POOL
  } elseif { [IP::addr [IP::client_addr]/27 equals 10.234.24.64] and ([active_members AMS_LDS0_ITHUBPR_POOL] == 0) } {
    pool GLOBAL_FAILBACK_LDS0_POOL
  } elseif { [IP::addr [IP::client_addr]/24 equals 10.222.152.0] and ([active_members EUR_LDS0_WIPRO_POOL] > 0) } {
    pool EUR_LDS0_WIPRO_POOL
  } elseif { [IP::addr [IP::client_addr]/24 equals 10.222.152.0] and ([active_members EUR_LDS0_WIPRO_POOL] == 0) } {
    pool GLOBAL_FAILBACK_LDS0_POOL
  } elseif { [IP::addr [IP::client_addr]/24 equals 10.221.152.0] and ([active_members AOA_LDS0_WIPRO_POOL] > 0) } {
    pool AOA_LDS0_WIPRO_POOL
  } elseif { [IP::addr [IP::client_addr]/24 equals 10.221.152.0] and ([active_members AOA_LDS0_WIPRO_POOL] == 0) } {
    pool GLOBAL_FAILBACK_LDS0_POOL
  } elseif { [IP::addr [IP::client_addr]/24 equals 10.223.152.0] and ([active_members AMS_LDS0_WIPRO_POOL] > 0) } {
    pool AMS_LDS0_WIPRO_POOL
  } elseif { [IP::addr [IP::client_addr]/24 equals 10.223.152.0] and ([active_members AMS_LDS0_WIPRO_POOL] == 0) } {
    pool GLOBAL_FAILBACK_LDS0_POOL
  } else {
    pool GLOBAL_LDS0_POOL
  }
}

There is a bug where the data group option is not available in DNS.

https://support.f5.com/csp/article/K13796

Though the article says it's fixed from version 12.1.0, I still don't see the option on version 15.1.3.

So, if you don't see the iRule data group option, then yes, using the IP addresses in the iRule itself is the option. If you do have the option of creating a data group, I would prefer it: a data group is easier to maintain if more source addresses need to be added in the future.
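To illustrate the maintenance point: with a data group, adding a new source network later is a one-line tmsh change rather than an iRule edit. A sketch only; the 10.240.x network and pool name here are made-up placeholders, and the data group name assumes one called special_pool_ips already exists:

```
# Hypothetical addition -- network and pool are placeholders:
(tmos)# modify /ltm data-group internal special_pool_ips records add { 10.240.24.64/27 { data NEW_REGION_POOL } }
```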

The above iRule seems okay; let us know how your testing goes.

crodriguez
F5 Employee

Data groups are available in BIG-IP DNS systems. Per the support article SanjayP identified, in versions prior to 12.1, you can only configure them using TMSH as a workaround. In v12.1 and later, you can also define them using the Configuration utility at DNS > GSLB > Delivery > iRules > Data Group List.

 

In the datagroup, you could define the IP network address ranges as the key and the associated pool name to use as the value. For example:

[Screenshot: special_pool_ips data group configuration] (In case the screen shot is too small...)

(tmos)# list /ltm data-group internal special_pool_ips
ltm data-group internal special_pool_ips {
    records {
        10.221.152.0/24 { data AOA_LDS0_WIPRO_POOL }
        10.222.152.0/24 { data EUR_LDS0_WIPRO_POOL }
        10.223.152.0/24 { data AMS_LDS0_WIPRO_POOL }
        10.234.24.64/27 { data AMS_LDS0_ITHUBPR_POOL }
        10.235.24.64/27 { data EUR_LDS0_ITHUBPR_POOL }
        10.236.24.64/27 { data AOA_LDS0_ITHUBPR_POOL }
    }
    type ip
}

Then your iRule could be reduced to something like below. I included log commands to write to /var/log/gtm when testing. These should be commented out for production. I checked for a match with the data group first, before checking to see if there are active members. That way you don't have to do two comparisons on every DNS query, only on those from the special client IPs:

when DNS_REQUEST {
    # set default pool for load balancing
    pool GLOBAL_LDS0_POOL
    # If the DNS request is from a client with a special IP address,
    # select a different load balancing pool, but only if it has an
    # available pool member. If there are no available pool members,
    # use GLOBAL_FAILBACK_LDS0_POOL.
    if { [class match [IP::client_addr] equals special_pool_ips] } {
        log local2. "Match with special_pool_ips datagroup for [IP::client_addr]"
        if { [active_members [class lookup [IP::client_addr] special_pool_ips]] > 0 } {
            log local2. "Pool [class lookup [IP::client_addr] special_pool_ips] has active members"
            pool [class lookup [IP::client_addr] special_pool_ips]
        } else {
            log local2. "Pool [class lookup [IP::client_addr] special_pool_ips] has no active members; using failback pool"
            pool GLOBAL_FAILBACK_LDS0_POOL
        }
    }
}

 

Nice example, and it looks like a better-optimized one.

Hello Sanjay,

 

Thanks for your advice on this.

 

Regards,

Kannan.

Hello Crodriguez,

 

Thank you very much for your help on this.

 

I applied the change and waiting for application team's test result.

 

Regards,

Kannan.