cluster
19 Topics

Backend nodes go unreachable from active F5
Hi Team, we are facing a node reachability issue from the active F5 every weekend (Sunday), but this happens only from the active F5 and only for a few VIPs/pools. We simply fail over the F5 to the standby and the issue resolves, then we fail back. Has anyone faced such an issue? We have a virtual appliance configured on an ESXi host. Before opening a TAC case, can anyone confirm whether you have faced a similar problem? I did not find anything in the audit logs indicating scheduled jobs running at that time. What else can be checked?
999 Views, 0 likes, 6 Comments
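
Some read-only checks worth capturing on the active unit the next time this happens, before failing over; the pool name and node address below are placeholders for your own objects:

    tmsh show ltm pool app_pool members   # which members the active unit currently marks down
    tmsh show net arp | grep 10.0.0.20    # whether the active unit holds a valid ARP entry for the node
    ping -c 3 10.0.0.20                   # basic reachability of the node from the active unit
    tail -n 50 /var/log/ltm               # monitor up/down messages around the time of the event

Comparing the same output on the standby (where the nodes stay reachable) usually shows whether the problem is ARP/L2 related or a monitor issue.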

Adding Viprion Blade Management Address in Production
Hi everyone. I have two VIPRIONs, each with an identical two-blade cluster. I have a situation where, when I create a new vCMP guest on Viprion02 and assign it an IP address, that IP address is unreachable from the network (all other vCMP guests are reachable). When I checked the mgmt address from the bash shell there was no IP address, whereas under System > Platform I can see the same IP address set for management. This issue does not happen on Viprion01. Looking at the Viprion02 cluster blades, only the cluster IP is configured and no blade management IP is configured, unlike Viprion01, where every blade has its own management IP. I would like to add management IP addresses to the Viprion02 blades.
1. Will there be an impact on production? From what I have read about VIPRION, I assume it should not have any impact, because the management IP is distinct from the management backplane used for blade intercommunication, traffic handling, and sync.
2. Which HA is that doc talking about? Blade HA is active-active, right? And guest HA uses its own management IP, not the blade management address.
Thank you
799 Views, 0 likes, 1 Comment
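
For reference, a minimal sketch of assigning blade management addresses from tmsh, assuming the default cluster name and example addresses from the 192.0.2.0/24 documentation range; the exact member syntax can differ between versions, so verify it against the output of tmsh list sys cluster first:

    # Show the current cluster and per-slot member configuration (read-only)
    tmsh list sys cluster default

    # Assign a management address to the blade in each slot (example addresses)
    tmsh modify sys cluster default members modify { 1 { address 192.0.2.11 } 2 { address 192.0.2.12 } }

    # Persist the change
    tmsh save sys config

Since these addresses live on the management network only, assigning them should not touch the data plane, but doing it in a maintenance window is still the safer choice.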

Cannot join devices into cluster
Hello, I am trying to join two devices into a device group on 11.5.3. I did the following things:
- Added the HB VLAN as well as other VLANs
- Set NTP and DNS servers
- Added a self IP in the HB VLAN (with port lockdown: allow all)
- Set the device ConfigSync address to the HB self IP
- Set the network failover addresses to the HB self IP and mgmt IP
- Set the mirroring address to the HB self IP
- Set the certificate for mgmt (during the wizard, so nothing special)
- Reset device trust, added the other device's mgmt IP with the correct credentials
- Ensured the local device can reach the mgmt web UI of the remote one (curl)
- Ensured the local device can ping the HB self IP of the remote device
- netstat -pan | grep -E 6699 shows no connections
Now when I look at the device group, I see 'Disconnected' for both the local and the remote machine. Ideas?
tmsh show net self: http://pastebin.com/raw/SsMkPcPP
tmsh show net vlan: http://pastebin.com/raw/XZkUcsrr
Solved, 699 Views, 0 likes, 10 Comments
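
A few read-only commands that can help narrow down where the trust or sync is failing, run on both units and compared side by side; config sync between devices normally uses TCP port 4353, so checking that port may be more telling than 6699:

    tmsh list cm device            # ConfigSync, mirroring and failover addresses as each unit sees them
    tmsh list cm trust-domain      # whether the peer actually appears in the trust domain
    tmsh show cm sync-status       # overall sync status plus the reason behind "Disconnected"
    tmsh show cm failover-status   # which unit believes it is active/standby and why
    netstat -pan | grep 4353       # established CMI connections between the peers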

BIG-IP : configure 2-node pool as simple failover
F5 BIG-IP Virtual Edition v11.4.1 (Build 635.0) LTM on ESXi. I have a RESTful service deployed on two servers (with no other sites/services). I've configured BIG-IP as follows:
- a single VIP dedicated to the service
- a single pool dedicated to the service
- two nodes, one for each server
- one health monitor which determines the health of each node
I need to configure this as a simple failover where traffic is sent only to the primary. So, if node 1 is primary and fails its health monitor, node 2 is promoted to primary and handles all traffic. How do I configure BIG-IP?
610 Views, 0 likes, 3 Comments
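
What is described here is typically done with priority group activation on the pool rather than any cluster feature; a minimal sketch with hypothetical member addresses and a plain HTTP monitor (adjust names, ports and monitor to your service):

    # The member with the higher priority-group value receives all traffic.
    # min-active-members 1 means the lower-priority member is only used when
    # no higher-priority member passes its monitor.
    tmsh create ltm pool rest_pool \
        min-active-members 1 \
        monitor http \
        members add { 10.0.0.11:443 { priority-group 10 } 10.0.0.12:443 { priority-group 5 } }

When node 1 fails its monitor, traffic shifts to node 2; when node 1 recovers, new connections move back to it because it again holds the highest active priority group.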

BigIP 12.0 Active-Active cluster vs Active-Passive cluster
I found a document for BigIP 10.0 that states that "Active/standby mode is the recommended mode for redundant system configuration". I'm trying to find the same information for BigIP 12.0. Is this still the case for 12.0? I have two BigIP 2000 series units clustered.
Solved, 536 Views, 0 likes, 1 Comment

BIG-IP active-standby configuration failed
We are working on a new project with F5 BIG-IP Link Controller. For this implementation we followed the step-by-step installation described in the following document: http://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/tmos-implementations-11-4-0/2.html
Everything was fine until the peer discovery process. The problem is that the second device did not discover the first device, so the two devices have no trust relationship and the BIG-IP appliances do not work as an active-standby pair. The error message is:
[CheckHandlers.cpp:83 checkDevic] System time on 192.168.50.100 off by -1800 seconds
We configured an NTP server on both devices. We also used the Setup utility several times to configure the new devices as an active-standby pair, following this article: https://support.f5.com/kb/en-us/solutions/public/10000/200/sol10240.html
Besides that, we checked the time zone and clock on both appliances and everything looks fine. By the way, we are in time zone America/Caracas (UTC -4:30). Thanks in advance.
501 Views, 0 likes, 6 Comments
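
Since 1800 seconds is exactly the half hour in the UTC -4:30 offset, it is worth comparing the clock, time zone and NTP state on both units side by side; all of the following are read-only:

    date                 # local time and time zone as each unit sees it
    date -u              # UTC time; should agree within a few seconds on both units
    ntpq -np             # whether the configured NTP peers are reachable and actually selected
    tmsh list sys ntp    # NTP servers and time zone configured in TMOS

Device trust will not establish while the peers' clocks disagree by more than a small margin, so the offset has to be fixed before re-running peer discovery.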

BIG-IP : sync-failover device-group sync options
F5 BIG-IP Virtual Edition v11.4.1 (Build 635.0) LTM on ESXi. For a device group of type sync-failover, the admin browser provides these options:
- Automatic Sync
- Full Sync
- Maximum Incremental Sync Size (KB)
Could someone please explain these options? Are syncs on a time schedule, or are syncs triggered by any change to the primary? Exactly what types of changes will trigger a sync? Specifically, will a change to a data-group file (e.g. adding/deleting a line) trigger a sync?
498 Views, 0 likes, 8 Comments
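
For reference, the same options live on the device group object in tmsh; a sketch assuming a device group named my-sync-failover (check the attribute names for your version with tmsh list cm device-group first):

    # View the current sync settings on the device group (read-only)
    tmsh list cm device-group my-sync-failover

    # Automatic sync pushes changes as they are committed; full-load-on-sync
    # forces a full rather than incremental sync each time
    tmsh modify cm device-group my-sync-failover auto-sync enabled full-load-on-sync true

    # With automatic sync disabled, nothing is synced until it is pushed manually
    tmsh run cm config-sync to-group my-sync-failover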

Viprion 2400 B2250 run two clusters?
Hello, I have redundant 2400 chassis with a single B2250 blade in each. vCMP guests are configured for sync-failover between the chassis; there is no HA at the VIPRION level. I need to set up two blades for a DR bubble environment that needs to be isolated from the existing blade in the default cluster, and the VLANs cannot mix between the blades. Is it advisable to run a second cluster for the slot 2 and 3 blades to segment the networks? I don't see much documentation on this, so any help is appreciated.
384 Views, 0 likes, 1 Comment

iRules LX config sync in cluster with workspaces, how?
Hi all, a customer of ours is successfully using iRules LX and likes it very much for doing some REST API security. He now has an iRules LX workspace located under /var/ilx/workspaces. He is using a BIG-IP cluster and is doing a config sync to the standby node. Unfortunately, the iRules LX workspaces are not synced to the other machine. The BIG-IP version is v12.1.1. Since I couldn't find any information about iRules LX and clustering, I am asking here: how should we proceed when the active cluster member goes down, taking the created iRules LX with it? Many thanks for an answer. Peter
368 Views, 0 likes, 6 Comments
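
Until the workspaces travel with config sync, a crude stopgap is to copy the workspace directory to the peer by hand; a sketch assuming root SSH access and a purely illustrative peer hostname of bigip-standby:

    # Run on the active unit: copy the iRules LX workspaces to the standby.
    # -r recurses into the workspace directories, -p preserves permissions.
    scp -rp /var/ilx/workspaces/* root@bigip-standby:/var/ilx/workspaces/

Whether the standby picks up the copied workspace without further action depends on the version, so testing a controlled failover afterwards is advisable.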

Is it possible to have a multi-chassis VIPRION cluster to further increase throughput?
Is it possible to have a multi-chassis VIPRION cluster to further increase throughput? Is it supported? I am trying to design an APM-based remote-access TLS VPN solution supporting 200,000 concurrent users, providing 100 Mbps of bandwidth to each user. As a result, it will need to support 20 Tbps of bulk crypto. Since each VIPRION 4450 blade is limited to a maximum of 80 Gbps of bulk crypto, I calculate that this would require 250 VIPRION 4450 blades in parallel. Since each VIPRION 4800 chassis only supports a maximum of 8 VIPRION 4450 blades, this exceeds the limits of a single VIPRION 4800 chassis cluster solution. Therefore, is it possible to have a multi-chassis VIPRION cluster?
340 Views, 1 like, 0 Comments
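
A quick back-of-envelope check of the sizing above, taking the 100 Mbps per user, 80 Gbps per blade and 8 blades per chassis figures from the question rather than from a datasheet:

    echo $(( 200000 * 100 / 1000 ))   # total throughput required in Gbps: 20000 (20 Tbps)
    echo $(( 20000 / 80 ))            # blades needed at 80 Gbps each: 250
    echo $(( (250 + 7) / 8 ))         # chassis needed at 8 blades each, rounded up: 32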