afedden_1985
Jul 12, 2012
Persistence and OneConnect
I'm looking for clues as to why source address affinity isn't working correctly in my configuration.
We have a VIP with hundreds of applications and have had several design issues getting it all working. The design we use is full of workarounds to compensate for things the LTM can't do, and I'm wondering if anyone else has set up this configuration and whether they have seen any issues with source address persistence.
The design is a front-end VIP with an iRule that inspects ingress URIs.
The iRule logic says:
if the URI matches the data group for Web Accelerator, send to WAM
if it is dynamic content or a POST, load balance without WAM
if the URI matches the data group for Windows or WebSEAL, send to the back-end virtual (roughly like the sketch below)
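A minimal sketch of what that routing logic might look like, assuming hypothetical data group, pool, and virtual server names (dg_wam_uris, dg_win_webseal_uris, pool_wam, pool_no_wam, vs_sub_windows); the real iRule obviously has more conditions:

when HTTP_REQUEST {
    # POSTs (dynamic content) bypass the Web Accelerator
    if { [HTTP::method] eq "POST" } {
        pool pool_no_wam
        return
    }
    # URIs in the Web Accelerator data group go to the WAM pool
    if { [class match [HTTP::uri] starts_with dg_wam_uris] } {
        pool pool_wam
        return
    }
    # URIs for the Windows / WebSEAL apps are handed to the sub VIP
    if { [class match [HTTP::uri] starts_with dg_win_webseal_uris] } {
        virtual vs_sub_windows
        return
    }
    # everything else: default load balancing without WAM
    pool pool_no_wam
}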
The back-end virtual, which I will refer to as the sub VIP, uses 192.168.100.x addresses that do not have a matching SNAT IP, so the original client IP is passed to the sub VIP.
The flow: client -> front-end VIP -> virtual -> pool
The front-end VIP has SNAT, OneConnect with a source mask of all 0s (0.0.0.0), no persistence profile, and an iRule for the L7 decisions.
The sub VIP has SNAT, no OneConnect profile, a source address affinity persistence profile, and a default pool (rough config sketch below).
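For reference, a rough v11-style tmsh sketch of that layout, with hypothetical names and addresses (vs_frontend, vs_sub_windows, irule_uri_router, pool_windows) standing in for the real ones, and the SNAT details omitted:

# front-end VIP: OneConnect with an all-0s source mask, iRule for L7 decisions, no persistence
create ltm profile one-connect oneconnect_mask_0s source-mask 0.0.0.0
create ltm virtual vs_frontend destination 10.10.10.10:80 ip-protocol tcp profiles add { tcp http oneconnect_mask_0s } rules { irule_uri_router }

# sub VIP: no OneConnect, source address affinity persistence, default pool
create ltm virtual vs_sub_windows destination 192.168.100.10:80 ip-protocol tcp profiles add { tcp http } persist replace-all-with { source_addr } pool pool_windows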
The issue I'm seeing is with the sub VIP. It's load balancing correctly but is not maintaining persistence to a pool member. TCP dumps confirm, by examining the X-Forwarded-For IP, that my connections are being sprayed across all 4 pool members and source address affinity is not working.
Has anyone seen this, or have any ideas why this could happen? One comment I heard was that the front-end VIP's OneConnect mask of all 0s may be the issue and that I might try changing it to all 1s (255.255.255.255). I have to have a OneConnect profile since we need to make L7 decisions on every GET or POST, not just the first one.
Let me know if you have any ideas. I will say that I changed one of these sub VIPs to use cookie insert persistence, and that fixed the issue for the application that was using that sub VIP.
Alan F
*****************
Well, support gave me an update that seems to explain why persistence was not working. It seems that using OneConnect with an all-0s source mask on the front-end VIP is the issue when using sub VIPs. Traces show my source IP connected to the sub VIP, but the F5 was reusing my connection for other clients. When a session hit the sub VIP, its persistence table was mapped to send my source IP and port to a particular server. If my request arrived on a different OneConnect tunnel, I could get load balanced to wherever that TCP session was already mapped based on its source IP. Lesson learned: in this configuration, set the OneConnect source mask to all 1s (255.255.255.255) on the front-end VIP.
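For anyone who hits the same thing, the change amounts to giving the front-end VIP a OneConnect profile with a /32 source mask so idle server-side connections are only reused for flows from the same client IP. A sketch using the same hypothetical names as above (v11-style tmsh):

# OneConnect profile that only reuses idle server-side connections
# across requests from the same client IP (/32 source mask)
create ltm profile one-connect oneconnect_mask_1s source-mask 255.255.255.255

# swap it onto the front-end virtual in place of the all-0s profile
modify ltm virtual vs_frontend profiles replace-all-with { tcp http oneconnect_mask_1s }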