cluster
19 Topics

Cannot join devices into cluster
Hello, I am trying to join two devices into a device group on 11.5.3. I did the following:

- Added the HB VLAN as well as the other VLANs
- Set NTP and DNS servers
- Added a self IP in the HB VLAN (with port lockdown: Allow All)
- Set the device ConfigSync address to the HB self IP
- Set the network failover addresses to the HB self IP and the mgmt IP
- Set the mirroring address to the HB self IP
- Set the certificate for mgmt (during the wizard, so nothing special)
- Reset device trust and added the other device's mgmt IP with the correct credentials
- Ensured the local device can reach the mgmt webui of the remote one (curl)
- Ensured the local device can ping the HB self IP of the remote device

netstat -pan | grep -E 6699 shows no connections. Now when I look at the device group, I see 'Disconnected' for both the local and the remote machine. Ideas?

tmsh show net self: http://pastebin.com/raw/SsMkPcPP
tmsh show net vlan: http://pastebin.com/raw/XZkUcsrr
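
A few tmsh checks can help narrow down where the trust/sync channel is failing. This is a minimal sketch; the trust-domain and device-group names are placeholders, and the assumption that config-sync communication (CMI) runs over TCP 6699 between the ConfigSync self IPs should be verified against the docs for your version:

```
# Overall sync and failover state as each device sees it
tmsh show cm sync-status
tmsh show cm failover-status

# Device trust and device-group membership (names are examples)
tmsh list cm trust-domain Root
tmsh list cm device-group my-failover-dg

# CMI is assumed to run over TCP 6699 between the ConfigSync self IPs;
# an ESTABLISHED connection to the peer's HB self IP should show up here
netstat -pan | grep 6699
```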

BIG-IP : sync-failover device-group sync options

F5 BIG-IP Virtual Edition v11.4.1 (Build 635.0) LTM on ESXi. For a device-group of type sync-failover, the admin browser provides these options:

- Automatic Sync
- Full Sync
- Maximum Incremental Sync Size (KB)

Could someone please explain these options? Are syncs on a time schedule? Or are syncs triggered by any change to the primary? Or exactly what types of changes will trigger a sync? Specifically, will a change to a data-group file (e.g. add/delete a line) trigger a sync?
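
For reference, these three GUI options correspond to properties on the cm device-group object. A sketch with a hypothetical group name DG1 (verify the property names against your version's tmsh reference):

```
# View the current sync settings for a device group
tmsh list cm device-group DG1 auto-sync full-load-on-sync incremental-config-sync-size-max

# Enable automatic sync so eligible changes replicate without a manual push
tmsh modify cm device-group DG1 auto-sync enabled
```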

Backend nodes go unreachable from active F5

Hi Team, we are facing a node reachability issue from the active F5 every weekend (Sunday), but it happens only from the active F5 and only for a few VIPs/pools. We simply fail over to the standby and the issue resolves, and then we fail back. Has anyone faced such an issue? We have the virtual appliance configured on an ESXi host. Before opening a TAC case, can anyone confirm whether you have faced a similar problem? I did not find anything in the audit logs indicating a scheduled job running at that time. What else can be checked?
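
When it happens again, capturing the active unit's view of an affected node before failing over may help isolate the cause; the pool, node, and self IP values below are placeholders:

```
# Monitor state of an affected pool and node as the active unit sees them
tmsh show ltm pool app_pool
tmsh show ltm node 10.0.0.10

# Does the active unit still have ARP for the node, and can it reach it
# when sourcing from the relevant self IP?
tmsh show net arp | grep 10.0.0.10
ping -I 10.0.0.1 10.0.0.10
```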

Adding Viprion Blade Management Address in Production

Hi everyone. I have two VIPRIONs, each with an identical two-blade cluster. I have a situation where, when I create a new vCMP guest on Viprion02 and assign it an IP address, that address is unreachable from the network (all the other vCMP guests are reachable). When I checked the mgmt address from the bash shell there was no IP address, whereas under System > Platform I can see the same IP address for management. This issue does not happen on Viprion01. I took a look at the Viprion02 cluster blades: only the cluster IP is configured, and no blade management IP is configured, unlike Viprion01, where every blade has its own management IP. I would like to add management IP addresses to the Viprion02 blades. 1. Will there be an impact on production? From what I have read about VIPRION, I assume it should not have any impact, because the management IP is distinct from the management backplane used for blade intercommunication, traffic handling, and sync. 2. Which HA is that doc talking about? Blade HA is active-active, right? And guest HA uses its own management IP, not the blade management address. Thank You
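
A sketch of how per-blade management addresses can be inspected and set from tmsh. The addresses are examples, and the exact members syntax varies by version (some releases want "members modify { ... }"), so check the tmsh reference first:

```
# Current cluster IP and per-slot member addresses
tmsh show sys cluster
tmsh list sys cluster default

# Assign management addresses to the blades in slots 1 and 2 (example IPs;
# verify whether your version requires "members modify { ... }" instead)
tmsh modify sys cluster default members { 1 { address 192.0.2.11 } 2 { address 192.0.2.12 } }
```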

Is it possible to have a multi-chassis VIPRION cluster to further increase throughput?

Is it possible to have a multi-chassis VIPRION cluster to further increase throughput? Is it supported? I am trying to design an APM-based remote access TLS VPN solution supporting 200,000 concurrent users, providing 100 Mbps of bandwidth to each user. As a result, it will need to support 20 Tbps of bulk crypto. Since each VIPRION 4450 blade is limited to a maximum of 80 Gbps of bulk crypto, I calculate that this would require 250 VIPRION 4450 blades in parallel. Since each VIPRION 4800 chassis only supports a maximum of 8 VIPRION 4450 blades, this exceeds the limits of a single VIPRION 4800 chassis cluster. Therefore, is it possible to have a multi-chassis VIPRION cluster?
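
The sizing arithmetic from the post as a quick shell check (the 80 Gbps per-blade and 8-blades-per-chassis figures are taken from the post itself, not verified against data sheets):

```
# 200,000 users x 100 Mbps each = 20,000 Gbps = 20 Tbps aggregate
echo $(( 200000 * 100 / 1000 ))   # -> 20000 (Gbps)
echo $(( 20000 / 80 ))            # -> 250 blades at 80 Gbps bulk crypto each
echo $(( (250 + 7) / 8 ))         # -> 32 chassis at 8 blades each, rounded up
```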

Config sync for ASM module not impacting other modules

Hello experts, I need advice on how to properly sync ASM policy/configuration between different devices. I have an environment with a sync-failover cluster consisting of 2 F5 devices in each data centre, so 4 devices in total. Each cluster runs APM, LTM and ASM. What I want is to configure sync between the clusters for the ASM module only, without impacting the other modules: if I make an ASM change on the cluster in the 1st DC, the change is synced to the 2nd DC cluster, while all other LTM/APM changes are synced between the devices of the particular DC cluster only and are not propagated between clusters in different DCs. Would this be possible? Is there any guide/KB to implement this? Thanks, Roman
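
One pattern that may fit is a separate sync-only device group spanning all four devices with ASM sync enabled, kept alongside the two per-DC sync-failover groups. The device and group names below are hypothetical, and the approach should be verified against the ASM synchronization guide for your version:

```
# Sync-only group across both DCs that carries only ASM policy sync
tmsh create cm device-group dg-asm-global type sync-only asm-sync enabled \
    devices add { bigip-dc1-a bigip-dc1-b bigip-dc2-a bigip-dc2-b }
```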

Viprion 2400 B2250 run two clusters?

Hello, I have redundant 2400 chassis with a single B2250 blade in each. The vCMP guests are configured for sync-failover between the chassis; there is no HA at the VIPRION level. I need to set up two blades for a DR bubble: the environment needs to be isolated from the existing blade in the default cluster, and the VLANs cannot mix between the blades. Is it advisable to run a second cluster for the slot 2 and 3 blades to segment the networks? I don't see much documentation on this, so any help is appreciated.
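
If a second chassis-level cluster turns out not to be an option, one possible way to keep the DR guests off the existing blade is to pin them to slots 2 and 3 and assign them only the DR VLANs. The guest name and slot numbers are examples, and allowed-slots support should be confirmed for your version:

```
# Restrict a vCMP guest to the blades in slots 2 and 3 only
tmsh modify vcmp guest dr-guest1 allowed-slots { 2 3 }
tmsh list vcmp guest dr-guest1 allowed-slots slots
```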

HTTP Monitor sends multiple requests with F5 in cluster

We have configured a custom HTTPS monitor to check a particular URL in the application. The problem is that our production environment has 4 F5 BIG-IPs configured in a cluster, and we are seeing each node of the cluster send its own HTTP request to monitor the web servers: with a configured interval of 10 seconds, we are actually receiving 4 HTTP calls every 10 seconds, not one every 10 seconds as we would expect. Does anyone know if this is the default behavior for F5, and/or is there a way to have only 1 call every 10 seconds even when the F5 is in a cluster? Thank you
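
For context, each BIG-IP runs its health monitors independently, so N devices produce N probes per interval by design. A sketch of the kind of monitor in question, with a hypothetical URL and names (lengthening the interval reduces total probe volume but does not change the per-device behavior):

```
# Each of the 4 devices runs this monitor itself: 4 probes per 10 s interval
tmsh create ltm monitor https app_https_mon defaults-from https interval 10 timeout 31 \
    send "GET /healthcheck HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n" \
    recv "200 OK"
```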

iRules LX config sync in cluster with workspaces, how?

Hi all, a customer of ours is using iRules LX with great success and likes it very much for doing some REST API security. He now has an iRules LX workspace located under /var/ilx/workspaces. He is using a BIG-IP cluster and does a config sync to the standby node. Unfortunately, the iRules LX workspaces are not synced to the other machine... The BIG-IP version is v12.1.1. Since I couldn't find any information about iRules LX and clustering, I'll ask here: how should we proceed so that the created iRules LX workspaces are still available when the active cluster member goes down? Many thanks for an answer. Peter
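
A possible manual workaround until workspace syncing is confirmed for this version: copy the workspace directory to the peer and rebuild the plugin from it there. The hostname, paths, and workspace/plugin names are assumptions, and the from-workspace step should be checked against the iRules LX documentation:

```
# Copy the workspace to the standby (hostname and paths are examples)
rsync -a /var/ilx/workspaces/Common/my_workspace/ \
    peer-bigip:/var/ilx/workspaces/Common/my_workspace/

# On the peer, rebuild the plugin from the copied workspace
tmsh modify ilx plugin my_plugin from-workspace my_workspace
```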