carp
4 Topics

Source IP and source port persistence using iRule - Citrix - (CARP vs UIE)
Hi, we ran into an issue of uneven load balancing caused by Citrix: clients all end up using the same source IP, so we decided we need to start load balancing on the source port as well. I did my homework and searched around until I came across multiple solutions using either UIE or CARP. I have several questions I'm hoping to get answers to:

- I understand CARP doesn't have a timeout; does that make it the better choice in this situation?
- We are leaning toward the Least Connections load-balancing method. Does either persistence algorithm limit you to a specific load-balancing method?
- In my iRule below I don't add a persist record explicitly, assuming it is done automatically. Am I wrong with that assumption? Should I be adding each successful persistence record myself?
- What would be the best way to test such an implementation?

Here is the iRule I'm about to implement:

    when CLIENT_ACCEPTED {
        set client_ip_port "[IP::client_addr]:[TCP::client_port]"
        if { [IP::client_addr] ne "" && [TCP::client_port] != 0 } {
            persist carp $client_ip_port
        }
    }

Source address persistence w/ CARP algorithm - hash calculation
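As a rough illustration of why keying on IP:port helps here, this is a Python sketch of CARP-style highest-score hashing, not F5's actual implementation; the member names, shared IP, and port range are made up:

```python
import hashlib
from collections import Counter

def pick_member(key: str, members: list[str]) -> str:
    """CARP-style selection: hash key+member, pick the highest score."""
    return max(members, key=lambda m: hashlib.md5(f"{key}|{m}".encode()).hexdigest())

members = ["member-a", "member-b", "member-c"]

# With an address-only key, every Citrix session from the shared IP
# lands on the same member...
ip_only = Counter(pick_member("203.0.113.10", members) for _ in range(1000))

# ...while keying on IP:port spreads those sessions across the pool.
ip_port = Counter(pick_member(f"203.0.113.10:{p}", members)
                  for p in range(49152, 50152))

print(ip_only)  # a single member carries all 1000 sessions
print(ip_port)  # roughly even split across the three members
```

The spread with IP:port keys is only as even as the hash is; with a thousand ephemeral ports it approximates a third per member.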
Does anyone have additional information, or a link to detailed resources on an F5 site, on how the LTM uses the CARP algorithm to calculate hashes when it is used in a source-address persistence profile? I have a virtual service handling HTTP in SSL passthrough mode. The service uses a pool of two members which connect to an SSO service on the back side to perform user authentication and SSO. According to the application admin, all of a user's connections must pass through the same pool member to take advantage of the SSO session. We are currently using source-address persistence, but the application admin raised issues about some sessions 'breaking', which he attributes to a persistence record expiring and the client being re-balanced to the other server. I'm looking at enabling the CARP hashing algorithm on the source-address persistence profile to remedy this, the thought being that the deterministic result of CARP hashing will resolve the issue of clients being re-balanced to a new server mid-'session', while also eliminating any persistence records on the LTM. The application admin's concern is the effect on the balance of connections between the two servers. Is the F5 implementation of CARP hashing with the source-address profile documented, as far as how the hash is computed from the source address and the available pool members? Thanks! -Ed

HASH Carp Persistence
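I can't speak to F5's exact hash, but the general CARP idea (a highest-score / rendezvous scheme) can be sketched in Python; the member addresses and client IPs below are hypothetical:

```python
import hashlib

def carp_score(key: str, member: str) -> int:
    # Combine the persistence key (the source address) with the pool
    # member's identity and hash the pair; each member gets a score.
    digest = hashlib.md5(f"{key}|{member}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def pick_member(key: str, members: list[str]) -> str:
    # The member with the highest score wins, so the mapping is fully
    # deterministic: no persistence record, nothing to expire.
    return max(members, key=lambda m: carp_score(key, m))

members = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
reduced = ["10.0.0.11", "10.0.0.12"]  # 10.0.0.13 taken offline

# Clients whose winning member is still up keep that member; only the
# clients mapped to the removed member move (unlike a modulo hash,
# which reshuffles most keys when the pool size changes).
for i in range(20):
    ip = f"198.51.100.{i}"
    before = pick_member(ip, members)
    if before != "10.0.0.13":
        assert pick_member(ip, reduced) == before
```

Because the winner depends only on the key and the member set, the balance between two members depends entirely on how client source addresses hash: many distinct client IPs approximate an even split, but a few heavy clients can skew it, which is the admin's legitimate concern.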
Hi guys, I've just configured hash persistence using the CARP algorithm, as described in this document: http://support.f5.com/kb/en-us/solutions/public/11000/300/sol11362.html

I've defined a persistence profile with these options:

- Hash Algorithm: CARP
- iRule: the iRule you created which contains the persist hash command
- Timeout: 0 seconds

This is the iRule:

    when CLIENT_ACCEPTED {
        persist carp [IP::client_addr]
    }

This persistence profile has been applied to a virtual server of type Performance (Layer 4) on a UDP port. I've noticed something strange: active connections have increased by around 35-40%. Is this normal? With this algorithm, should active connections be higher?

Thanks for your help, Ron

Problem with session persistence using CARP when load balancing a McAfee Web Gateway cluster using progress page for downloads
We have a cluster of 14 McAfee Web Gateways and about 15,000 users connecting to them from a few dozen Citrix farms. Previously we used source-address persistence, which works fine until one of the pool members is taken offline and then brought back online: all clients are then load balanced to another available pool member, and the one that was offline gets no traffic after that.

Enter hash persistence using CARP. The idea is simple: take something like the Host header, make a hash of it, then load balance using the CARP algorithm. This also works great, except when downloading files. McAfee Web Gateway downloads the file for malware scanning before delivering it to the client, and meanwhile displays a progress page to the client. The problem is that with hash persistence, quite often the progress page will show an error, because I get load balanced to a different pool member than the one showing me the progress page. I really would like to use hash persistence, but I'm not sure there is a proper workaround for this. Any suggestions? What are you guys doing for persistence to web caches?