How to force close TLS sessions in a failover scenario
Hi, We have an application behind BIG-IP which doesn't handle failovers well. The BIG-IP keeps all TLS sessions consistent and open during a failover, but the application doesn't support resuming a TLS session, and this causes problems in the app. I'm looking for a way to close TLS sessions for a specific VS in a failover scenario. We're on version 16.1.4.1. Any suggestions? Thanks
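One approach worth trying (a sketch, not a confirmed fix) is to clear the connection table for that virtual server when the unit goes active, so clients have to reconnect and negotiate a fresh TLS session. This assumes the /config/failover/active script hook, which runs when the unit becomes active; the virtual server address is a placeholder:

```bash
#!/bin/bash
# /config/failover/active -- assumed to run when this unit becomes active.
# Drop connections carried over for the virtual server (10.1.1.100:443 is a
# placeholder address) so clients reconnect and perform a full TLS handshake.
tmsh delete sys connection cs-server-addr 10.1.1.100 cs-server-port 443
```

If connection mirroring is enabled on that virtual server, disabling it (mirror disabled on the virtual) would also stop established connections from being carried across the failover in the first place.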
VLAN Failsafe failover settings change on STANDBY device - affect ACTIVE device?
We have two devices in an HA group, with VLAN failsafe configured and set to fail over on both devices. If I turn off VLAN failsafe on the standby device, does that affect the HA group or the ACTIVE device?
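For reference, a way to check and change the setting from tmsh on the unit where you run it (the VLAN name client_vlan is a placeholder):

```bash
# Show the failsafe settings for the VLAN in question, then disable failsafe
# on this unit ("client_vlan" is a placeholder VLAN name).
tmsh list net vlan client_vlan failsafe failsafe-action failsafe-timeout
tmsh modify net vlan client_vlan failsafe disabled
```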
Pool members not stable after failover
Hi, Our setup:
- two vCMP guests in HA (VIPRION with two blades)
- ~10 partitions
- simple configuration with LTM and AFM; nodes directly connected to the F5 device (the F5 device is the default gateway for the nodes)
- software 16.1.3.3, now 16.1.4 after an upgrade
This setup exists in two data centers. We are hitting interesting behaviour in the first data center only:
- When the second F5 guest is active, the pool member monitors (HTTP and HTTPS) respond without problems and everything is stable. This is valid for both F5 devices in the HA pair.
- After failover (first F5 guest active), the pool member responses are not stable (unstable for the HTTPS monitor; HTTP is still stable). Sometimes all pool members are down, and then the virtual server goes down.
It looks like a problem on the node side, but it isn't, because when the second F5 device is active everything is stable. This issue hits almost all partitions. We checked:
- physical interfaces: everything is stable, no errors on ports or ether-channels (trunks)
- ARP records: everything looks correct, no MAC flapping
- spanning tree: stable in the environment
- routing: correct; default gateway on the node side: correct; subnet mask: correct on the nodes and both F5 devices; floating addresses work correctly (including ARP in the network)
- logs on the F5 devices: nothing related to this behaviour
I don't know what else related to this issue we can check. The configuration for all F5 devices (2x DC1, 2x DC2 - two independent HA pairs) is the same (configured with automation), and the software version is the same (we upgraded to 16.1.4 two days ago). It looks like something is "blocked" on the first F5 device in DC1 (a reboot or upgrade does not solve the issue). Do you have any idea what else to check?
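One way to narrow this down is to capture the HTTPS monitor traffic toward an affected pool member while the problem guest is active, take the same capture while the second guest is active, and compare the TLS handshakes (the node address below is a placeholder):

```bash
# Capture monitor traffic to one affected node (10.10.10.20 is a placeholder);
# repeat while the other guest is active and compare the two captures.
tcpdump -nni 0.0:nnn -s0 -w /var/tmp/https_monitor_dc1.pcap host 10.10.10.20 and port 443
```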
GTM - Topology load balancing failover when one pool is down
Hello All, I am looking for a solution to a problem that has been raised several times, but I cannot find a confirmed solution. The situation I am in is described in the following post: GTM Topology across pools issue when one of the po... - DevCentral (f5.com). We have two topology records with the same source but different destination pools, with different weights:
- SRC: Region X => DEST: Pool A, weight 200
- SRC: Region X => DEST: Pool B, weight 20
When Pool A is down, topology load balancing for the Wide IP still selects Pool A, which is down, and no IP is returned to the client. If the topology load balancing selection mechanism is not going to take the status of the destination pool into account and just stops at the first match, then why have "Weight" at all? I do not believe disabling "Longest Match" would help, as that only affects the order in which the topology records are searched; it would still stop at the first match. The often-mentioned solution is to use a single pool with Global Availability load balancing, as described in the post: GTM and Topology - DevCentral (f5.com). The problem I have is that Pool A and Pool B are pools with multiple generic host servers. I cannot have a single pool with all the generic hosts in it, because we want the members in each pool to be Active/Active and not Active/Backup. Many thanks, Michael
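For reference, the two records described above would look roughly like this in tmsh; the region and pool names are placeholders, and this assumes the GUI "Weight" field corresponds to the record's score in tmsh. It only illustrates the configuration in question, not a fix:

```bash
# The two topology records from the question, expressed in tmsh.
# Region and pool names are placeholders; the GUI "Weight" is assumed to map to "score".
tmsh create gtm topology ldns: region /Common/Region_X server: pool /Common/Pool_A score 200
tmsh create gtm topology ldns: region /Common/Region_X server: pool /Common/Pool_B score 20
```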
BIG-IP VE VMWare Cluster HA triggering configuration
Hi, this is my first step into BIG-IP VE deployments (always VIPRION so far). I have all my test clusters up and running in a VMware environment: Active/Standby using a dedicated vNIC and VLAN, 4 vNICs per device, 2 cluster members, each one running on a different ESXi host. But I would like to double-check which would be the best option to trigger HA. In physical deployments we deploy an HA Group based on trunks, but that does not work for all cases here. Would a failsafe condition based on VLAN be the best solution? E.g., failover to the standby BIG-IP in case no ARP is received from the client_VLAN gateway? Any comment welcome! Regards.
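If VLAN failsafe turns out to be the right trigger, a minimal sketch of the configuration on the client-facing VLAN would be something like the following; the VLAN name and timeout are placeholders. When no traffic is seen on the VLAN, the unit tries to solicit responses (e.g. via ARP), and if nothing arrives within the timeout it takes the failsafe action:

```bash
# Enable VLAN failsafe on the client-facing VLAN so this unit fails over when
# the VLAN goes quiet ("client_vlan" and the 90-second timeout are placeholders).
tmsh modify net vlan client_vlan failsafe enabled failsafe-timeout 90 failsafe-action failover
```

An HA group built on a gateway pool (monitoring the client_VLAN gateway with a gateway_icmp monitor) is another option worth considering on VE, since trunk-based HA groups generally don't apply there.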