source address affinity
How to delete an existing persistence record and redirect the request to a node when a URI is matched
Hi All,

We currently have the iRule below attached to an LTM VIP. It does a conditional check on the URI and, if the URI matches, sends the request to a particular node, Node A (the CMS console); otherwise the request goes to the default pool. The default pool uses source address affinity persistence with a timeout of 3600 seconds.

The problem occurs when an admin who is already connected to a different pool member goes to the admin console via the URL "www.mywebserver.com/cms". Once he is connected to the admin console, we still see a persistence record from the earlier connection (let's say he was connected to Node B) in the persistence table (I am sure it is because of the 3600-second timeout value), and the user gets logged out frequently.

To resolve this issue, I am thinking of removing the existing persistence record when a user goes to the CMS console (Node A), and also having Node A use source address affinity as its persistence method with the timeout increased to 7200 seconds. All suggestions are welcome.

iRule we currently have:

    when HTTP_REQUEST {
        set http_uri [string tolower [HTTP::uri]]
        if { $http_uri starts_with "/cms" } {
            node 172.18.32.115
        } elseif { $http_uri starts_with "/util" } {
            node 172.18.32.115
        }
    }

iRule I wanted to configure (I am not sure of the syntax to do this; please help me with it):

    when HTTP_REQUEST {
        set http_uri [string tolower [HTTP::uri]]
        if { $http_uri starts_with "/cms" } {
            # delete the existing persistence record
            persist delete source_addr [IP::client_addr]
            # use source address persistence with a 7200 s timeout
            persist source_addr 7200
            # and then select the CMS server
            node 172.18.32.115
        } elseif { $http_uri starts_with "/util" } {
            persist delete source_addr [IP::client_addr]
            persist source_addr 7200
            node 172.18.32.115
        }
    }
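Since both branches select the same node, the rule can also be collapsed into a single test. A minimal sketch along the lines the post describes, assuming the poster's `persist delete source_addr` syntax is supported on the running TMOS version (verify against the iRules persist command reference before relying on it) and that `persist source_addr <timeout>` takes the timeout in seconds:

    when HTTP_REQUEST {
        set http_uri [string tolower [HTTP::uri]]

        # /cms and /util both go to the CMS node, so one branch covers both
        if { ($http_uri starts_with "/cms") || ($http_uri starts_with "/util") } {
            # Drop the record pointing at the old pool member so the client
            # is not pulled back to it (assumes persist delete is available)
            persist delete source_addr [IP::client_addr]

            # Re-persist on source address for 7200 s, then pick the CMS node
            persist source_addr 7200
            node 172.18.32.115
        }
    }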
Source address persistence troubleshooting

Hi,

I am looking for a way to find out whether source address persistence is working correctly. There are two VSs (Standard TCP IP:any and Standard UDP IP:any, both with pools pointing to the same nodes) with this profile attached:

    ltm persistence source-addr lamp_persist_match_vs {
        app-service none
        defaults-from source_addr
        description none
        hash-algorithm default
        map-proxies enabled
        map-proxy-address none
        map-proxy-class none
        mask none
        match-across-pools disabled
        match-across-services disabled
        match-across-virtuals enabled
        mirror disabled
        override-connection-limit disabled
        partition Common
        timeout 32400
    }

match-across-virtuals is enabled, so after a client's first connection to either VS, subsequent connections should go to the same node for at least 32400 s. I am looking for a way to confirm that this is happening. So I need to figure out whether a given client IP is directed ONLY to the same node for the duration of the timeout. In other words, I need to catch the exception where a given client IP is rebalanced to another node during the timeout period.

Any ideas how to do that? I was thinking about using an iRule with iStats, but I am not so good with the iStats stuff. My first approach was to create code like this:

    when LB_SELECTED {
        set node_sel [LB::server addr]
        log local0. "Selected node is $node_sel"
        ISTATS::incr "ltm.virtual [virtual name] node $node_sel client.ip [IP::client_addr] c balanced" 1
        ISTATS::incr "ltm.virtual [virtual name] c [IP::client_addr]-${node_sel}" 1
        ISTATS::incr "ltm.virtual [virtual name] c count_it" 1
        log local0. "Current counter is: [ISTATS::get "node $node_sel client.ip [IP::client_addr] c balanced"]"
    }

I tried different syntax variants but I am not really sure if that's the way to go. The goal is to be able to collect all the nodes a given client IP connected to during the timeout period. If there is no persistence issue, just one entry should be created, listing the client IP and the selected node; if there is an issue, I expect two entries to be created (the pool contains two pool members).

Piotr
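An alternative that avoids iStats entirely is the iRules session table: record the node last selected for each client IP and log whenever a later selection differs. A minimal sketch, assuming the subtable name persist_check (an arbitrary label, not from the original post) and an entry timeout matching the profile's 32400 s:

    when LB_SELECTED {
        set client [IP::client_addr]
        set node_sel [LB::server addr]

        # Node this client was last balanced to; empty string if no entry exists
        set prev_node [table lookup -subtable persist_check $client]

        if { $prev_node ne "" && $prev_node ne $node_sel } {
            # Client was rebalanced inside the timeout window: the exception to catch
            log local0. "Persistence exception on [virtual name]: $client moved from $prev_node to $node_sel"
        }

        # Store or refresh the selection; timeout mirrors the persistence profile
        table set -subtable persist_check $client $node_sel 32400
    }

Attached to both virtual servers, this would surface a rebalance on either VS within the window as a single log line, rather than requiring the iStats entries to be correlated afterwards.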