Forum Discussion
Brendan_Hogan_9 (Nimbostratus)
Jun 26, 2009

iRule to selectively allow subnets no longer working
Actually 2 issues:
1) We are currently on
We used to use the following iRule during maintenance windows to allow only particular subnets to connect; users outside those subnets were sent to a fallback page. The last two times I enabled it, users from outside those subnets were able to connect to the application with no problem and were not getting the redirect. I know at one point we upgraded to BIG-IP 9.3.1 Build 37.1 and also made many network changes, but since we only need this iRule several times a year per application, I can't attribute the breakage to a specific change. Any ideas what might need to change in this iRule to make it work again?
when HTTP_REQUEST {
    if { [IP::addr [IP::client_addr]/24 equals 100.100.100.100] } {
        pool sa89prod
    } elseif { [IP::addr [IP::client_addr]/24 equals 200.200.200.200] } {
        pool sa89prod
    } elseif { [IP::addr [IP::client_addr]/22 equals 10.10.10.10] } {
        pool sa89prod
    } else {
        HTTP::redirect "https://x.y.com"
    }
}
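[Editor's note] A likely culprit (an assumption, not confirmed in the thread) is the form of the IP::addr comparison: `[IP::addr [IP::client_addr]/24 equals 100.100.100.100]` masks the client address down to its network address (e.g. 100.100.100.0) and then compares it against a host address, which never matches on releases that apply the mask that way. The more robust form puts the mask on the network operand and uses the network address (host bits zeroed) on the right-hand side. A minimal sketch, reusing the poster's placeholder addresses (note the /22 network containing 10.10.10.10 is 10.10.8.0):

```tcl
when HTTP_REQUEST {
    # Compare the client address against networks, with the mask on the
    # right-hand operand; host bits in the network value must be zero.
    if { [IP::addr [IP::client_addr] equals 100.100.100.0/24] or
         [IP::addr [IP::client_addr] equals 200.200.200.0/24] or
         [IP::addr [IP::client_addr] equals 10.10.8.0/22] } {
        pool sa89prod
    } else {
        HTTP::redirect "https://x.y.com"
    }
}
```

For more than a handful of subnets, an address-type data group checked with matchclass (on 9.x) keeps the rule short and lets you change the allow list without editing the iRule.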
2) Not sure where to post this; it's not an iRule issue, but it's related to the above. For maintenance on a particular server, we disable the node within its pool to prevent any new connections and "bleed" users off, since we don't want to interrupt current sessions. Any best-practice suggestions on how to accomplish this better, maybe an iRule? We know users sometimes leave their sessions open. One application in particular times a session out after 20 minutes of inactivity, but as best I can tell these still show up in the pool statistics as connections. Basically, we disable the node and then watch the pool statistics until there are close to zero connections, but the connections just seem to keep coming in. Are there any timeout settings on the BIG-IP side I should look at?
- hoolio (Cirrostratus):
Hi,

- Brendan_Hogan_9 (Nimbostratus):
1. I'll try the OneConnect profile

- hoolio (Cirrostratus):
For HTTP traffic, the TCP idle timeout typically shouldn't need to be set above five minutes, and even that can be lowered for high-volume web apps. You might do better to create a custom TCP profile for whatever virtual server needs a very long TCP idle timeout and leave the default profile at 300 seconds. You can also create a second custom profile with a lower timeout for this particular app if you want.

- Brendan_Hogan_9 (Nimbostratus):
Thank you Aaron. Forcing a node offline is exactly what I needed!
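[Editor's note] The distinction that resolves question 2: a Disabled node still accepts new connections from clients with existing persistence records, which is why connections "keep coming in"; Forced Offline allows only active connections to complete. A sketch of the forced-offline approach and hoolio's custom TCP profile from tmsh, with hypothetical object names; this is v10-era tmsh syntax and may differ on 9.x, where the equivalent bigpipe commands would apply:

```shell
# Force a node offline: active connections finish, but no new
# connections are accepted, even from persisted clients.
tmsh modify ltm node 10.10.10.10 state user-down

# Re-enable the node after maintenance.
tmsh modify ltm node 10.10.10.10 state user-up

# Custom TCP profile with a 300-second idle timeout, inheriting
# everything else from the default tcp profile.
tmsh create ltm profile tcp tcp_short_idle defaults-from tcp idle-timeout 300
```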