Forum Discussion
AndyC_86542
Nimbostratus
Jan 26, 2009
Strange persist behaviour
I have this little section of code in an iRule that decides which of various pools to send an HTTP request to. I wanted to make sure that requests from a single client go through the same proxy server within squid-proxy-pool.
set sourceIPAddress [IP::client_addr]
pool squid-proxy-pool
persist source_addr 7200
set pResult [persist lookup source_addr $sourceIPAddress]
log local0.debug "ip: $sourceIPAddress - persist result: $pResult"
I thought that this would do what I wanted and just added the last two lines to debug a separate problem. However, when I look at the logs, I get results like the following (dates removed from timestamps for readability):
16:30:12 ip: 192.168.20.221 - persist result: squid-proxy-pool 172.22.21.250 80
16:30:14 ip: 192.168.20.221 - persist result: squid-proxy-pool 172.22.21.249 80
16:30:16 ip: 192.168.20.221 - persist result: squid-proxy-pool 172.22.21.249 80
16:30:17 ip: 192.168.20.221 - persist result: squid-proxy-pool 172.22.21.249 80
16:30:17 ip: 192.168.20.221 - persist result: squid-proxy-pool 172.22.21.250 80
Am I completely missing the point with persist or is this really spattering requests all over the pool without regard to the source_addr setting?
14 Replies
- Deb_Allen_18
Historic F5 Account
That sure doesn't look right.
Did you examine the LTM log to see if the pool members changed state during that time due to monitor activity?
(You don't mention what the other problem you were troubleshooting was; it might be related...)
/d
- AndyC_86542
Nimbostratus
Hi Deb,
The original problem was some odd delays/hanging at the squid proxy. I don't think it has any bearing on the persistence problem though.
There's nothing in the logs that I can see, although I am not sure what you meant by the LTM log (I'm looking at local traffic logs in the F5 web interface). There is only gateway_icmp monitoring on the pools at the moment.
I did notice something else odd. This is the persistence table data under Pools -> squid-proxy-pool -> Statistics:
Persistence Value   Persistence Mode          Virtual Server     Pool               Pool Member        Age
172.22.20.220       Source Address Affinity   catchall-virtual   squid-proxy-pool   172.22.21.249:80   4817 seconds
172.22.20.220       Source Address Affinity   catchall-virtual   squid-proxy-pool   172.22.21.249:80   1217 seconds
172.22.20.228       Source Address Affinity   catchall-virtual   squid-proxy-pool   172.22.21.250:80   731 seconds
172.22.20.228       Source Address Affinity   catchall-virtual   squid-proxy-pool   172.22.21.249:80   731 seconds
As you can see, there are two entries for each source IP address. Is it possible that multiple requests arrive simultaneously and are round-robin load balanced over the pool, with each request creating its own entry in the persistence table? If so, how do I avoid that? If not, does anyone have an idea what's going on?
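One way to test that theory (a minimal sketch under the assumption that such a race is what's happening, not code posted in this thread) would be to look up any existing source-address record before load balancing and pin the request to that member explicitly; the record format used here is assumed from the debug output logged earlier ("<pool> <ip> <port>"):
when HTTP_REQUEST {
   # Sketch only: check for an existing source-address record before balancing.
   set rec [persist lookup source_addr [IP::client_addr]]
   if {$rec ne ""} {
      # Record format assumed from the earlier debug log: "<pool> <ip> <port>"
      pool squid-proxy-pool member [lindex $rec 1] [lindex $rec 2]
   } else {
      # No record yet: let load balancing pick a member and create the record
      pool squid-proxy-pool
      persist source_addr 7200
   }
}
Note that two requests arriving at exactly the same instant could still both fall into the else branch, so a guard like this only narrows the window rather than closing it.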
Cheers
Andy
- hoolio
Cirrostratus
Hi Andy,
Are clients making requests to multiple VIPs for this traffic? Can you post an anonymised copy of the VIP and pool configurations using 'b virtual VIP_NAME list' and 'b pool POOL_NAME list'?
Thanks,
Aaron - AndyC_86542
Nimbostratus
Hi Aaron,
Output for VIPs, pools and iRule as follows (I've replaced webcache by 8080 for the port, for clarity, on the login pool):
virtual catchall-virtual {
   destination any:http
   mask none
   ip protocol tcp
   rules session-management
   profiles http tcp
   translate address enable
}
pool login-pool {
   monitor all gateway_icmp
   members
      172.22.21.137:8080
      172.22.21.147:8080 down session disable
}
pool squid-proxy-pool {
   monitor all gateway_icmp
   members
      172.22.21.249:http
      172.22.21.250:http
}
rule session-management {
   when RULE_INIT {
      set ::asm_bypass 0
      set ::cookieName "wibble"
   }
   when HTTP_REQUEST {
      set hasCookie [HTTP::cookie exists $::cookieName]
      if {!($hasCookie)} {
         pool login-pool
      } else {
         # Cookie found
         <...some URL parameter tweaking code removed...>
         # Use persist by client ip address with a timeout of 2 hours
         set sourceIPAddress [IP::client_addr]
         pool squid-proxy-pool
         persist source_addr 7200
         set pResult [persist lookup source_addr $sourceIPAddress]
         log local0.debug "ip: $sourceIPAddress - persist result: $pResult"
      }
   }
}
I've tried having the persistence defined on the VIP (with no default pool) instead of in the iRule, with it just in the iRule and with it in both places. All three give the same result of 2 entries in the persistence table for each source IP.
If source address affinity is turned on for the VIP, we also get double entries for each login-pool/VIP/source-address combination, even though one of the pool members is disabled.
We don't get double entries if there is no iRule (i.e. boring load balanced servers with source address affinity).
Cheers
Andy
- hoolio
Cirrostratus
I don't see anything obvious in the config that would cause this. For testing, can you remove the source address persistence profile from the VIP? Can you also clear the persistence records for the VIP (or wait for them to expire) and then sprinkle your iRule generously with this:
log local0. "Current persist record for [IP::client_addr]: [persist lookup source_addr $sourceIPAddress]"
Can you also run 'b persist all show all' once you see the problem occur? Please post anonymized copies of the logs and the 'b persist' output.
when CLIENT_ACCEPTED {
   log local0. "[IP::client_addr]:[TCP::client_port]:\
      [persist lookup source_addr [IP::client_addr]], new connection"
}
when HTTP_REQUEST {
   log local0. "[IP::client_addr]:[TCP::client_port]:\
      [persist lookup source_addr [IP::client_addr]], request, URI: [HTTP::uri]"
   set hasCookie [HTTP::cookie exists $::cookieName]
   if {!($hasCookie)} {
      log local0. "[IP::client_addr]:[TCP::client_port]:\
         [persist lookup source_addr [IP::client_addr]], no cookie, using login-pool, URI: [HTTP::uri]"
      pool login-pool
   } else {
      log local0. "[IP::client_addr]:[TCP::client_port]:\
         [persist lookup source_addr [IP::client_addr]], cookie, using squid-pool, pre-persist, URI: [HTTP::uri]"
      # Cookie found
      <...some URL parameter tweaking code removed...>
      # Use persist by client ip address with a timeout of 2 hours
      pool squid-proxy-pool
      persist source_addr 7200
      log local0. "[IP::client_addr]:[TCP::client_port]:\
         [persist lookup source_addr [IP::client_addr]], cookie, using squid-pool, post-persist, URI: [HTTP::uri]"
   }
}
Aaron
- AndyC_86542
Nimbostratus
Hi,
Thanks for the tips on logging. Having tried Aaron's logging suggestion, I can see a pattern emerging. I've not included the whole logs, as I had to go through a lot of them to spot the pattern. Basically, I get consistent persistence within a tmm. If I trim the log to just the LB_SELECTED logging, I get something like this.
This is the logging line:
when LB_SELECTED {
   log local0. "[IP::client_addr]:[TCP::client_port] : [LB::server]"
}
and this is the output:
tmm tmm[ 1835 ] Rule test LB_SELECTED: 172.22.20.221:4918 squid-pool 172.22.21.249 80
tmm tmm[ 1835 ] Rule test LB_SELECTED: 172.22.20.221:4918 squid-pool 172.22.21.249 80
tmm1 tmm1[ 1861 ] Rule test LB_SELECTED: 172.22.20.221:4777 squid-pool 172.22.21.250 80
tmm will give consistent load balancing to one address and tmm1 will give consistent load balancing to another address (on some test runs, randomly, they are the same address).
I don't know what the host/service values in the log mean, but I'm guessing this could be something to do with CMP (clustered multi-processing).
I'm running on a BIG-IP 1600 9.4.5 Build 1049.10 Final.
Can anyone enlighten me?
Cheers
Andy
- dennypayne
Employee
You're probably on to something; I would try turning off CMP on that virtual and see what happens. I think the syntax is: bigpipe virtual catchall-virtual cmp disable.
Denny
- AndyC_86542
Nimbostratus
I tried disabling CMP on the virtual IP that gives the problem, and then on all virtuals. No change. Still getting tmm and tmm1 in the logs, and showing the same behaviour.
- Jos_52767
Historic F5 Account
If source address persistence does not work within the VIP, then perhaps you can try disabling it and using UIE persistence in the iRule instead - https://support.f5.com/kb/en-us/solutions/public/7000/300/sol7392.html. The example in SOL7392 talks about persisting on the value of JSESSIONID, but you should be able to persist on [IP::client_addr].
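A minimal sketch of what that might look like, assuming the SOL7392 pattern applied to the client address with the same two-hour timeout used earlier (an illustration rather than code posted in the thread; depending on version, a Universal persistence profile on the virtual may also be needed for the persist command to take effect):
when HTTP_REQUEST {
   # Sketch only: universal (UIE) persistence keyed on the client IP address
   # instead of the source_addr mechanism, with a 7200-second timeout.
   pool squid-proxy-pool
   persist uie [IP::client_addr] 7200
}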
- AndyC_86542
Nimbostratus
Hi Jos,
I tried converting everything to UIE persistence based on [IP::client_addr] instead of source_addr persistence and got exactly the same results. I also tried that after doing "b virtual all cmp disable" - same result.
I've now resorted to limiting the BIG-IP 1600 to one tmm by doing:
bigpipe db Provision.tmmCount 1
from the console and rebooting [as suggested in https://support.f5.com/kb/en-us/solutions/public/7000/700/sol7751.html].
That seems to have stopped the problem, but presumably I've halved the capacity of my F5 by doing it, which is not ideal.
Is there some obscure console command that forces a single persistence table shared between TMMs rather than one each?
Cheers
Andy