high availability
Adding New BIG-IP Units Using Free CPUs on i7800 (LB13 & LB14) to an Existing HA Pair
Hello, we have a pair of i7800 devices, each running two guests with the following configuration:

i7800 Unit 1: LB09 & LB11
i7800 Unit 2: LB10 & LB12

Current setup:
LB09 & LB10: an HA pair running in an Active/Active configuration.
LB11 & LB12: likewise an HA pair running in Active/Active mode.
Each LBxx guest uses 6 CPUs (the maximum allowed per guest). The i7800 comes with 14 CPUs in total, so there are 2 unused CPUs per unit (not allocated to any guest).

Questions:
Can we bring up new BIG-IP guests (LB13 & LB14) using the 2 free CPUs on each i7800 unit?
Can these new units join the existing HA pair of LB11 & LB12 and sync configuration with each other?
Can the new units serve their own traffic groups (as shown below in the diagram)?
Are there any limitations or issues with using the remaining CPUs for this purpose?
Will the i7800 run out of memory or run low on memory?

Goal: offload 20% of the traffic (a handful of VIPs) to the newly created LB13 & LB14 units. All four units (LB11, LB12, LB13 & LB14) should back each other up and sync configuration across the group. Thank you for any assistance.
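If the two spare cores per host really are unallocated, an extra 2-core vCMP guest per i7800 is generally possible. A minimal tmsh sketch of what that could look like; all guest, device, device-group and traffic-group names, VLANs, IP addresses and the admin password below are hypothetical placeholders:

# On each i7800 host: create a small vCMP guest from the 2 spare cores
# (guest names, VLANs and management addressing below are placeholders)
create vcmp guest LB13 cores-per-slot 2 management-ip 10.1.1.13/24 management-gw 10.1.1.254 vlans add { internal external } state deployed

# On the new guest: set its ConfigSync and failover (unicast) addresses
modify cm device LB13.example.com configsync-ip 10.10.10.13 unicast-address { { ip 10.10.10.13 port 1026 } }

# On an existing unit (e.g. LB11): add the new guests to the device trust and to the
# existing sync-failover device group, then create a traffic group for the offloaded VIPs
modify cm trust-domain /Common/Root ca-devices add { 10.1.1.13 } name LB13.example.com username admin password <admin-password>
modify cm device-group dg-lb11-14 devices add { LB13.example.com LB14.example.com }
create cm traffic-group TG-offload ha-order { LB13.example.com LB14.example.com }

Note that a real guest also needs an initial software image and a license, and vCMP carves host memory per allocated core, so a 2-core guest receives a proportionally smaller share of RAM rather than pushing the host into a low-memory condition.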
Consolidate: Multi-tenancy HA
Hi all, I've spent the last week or so trawling through documentation and the forums trying to find out if what I'm thinking about doing is possible. At the moment we have far more physical VIPRION chassis than we need; we have recently been able to decommission some 2000-series appliances, and we have some VE GTMs. We have an opportunity that might allow us to consolidate and make life a little easier. We have a mix of deployment methodologies in use, such as vCMP on a single VIPRION, a VIPRION HA pair (active/standby) spanning two DCs, and a VIPRION HA pair within one DC. If I attempt to push a huge migration of these various deployments it will never get anywhere, so I am hoping I can achieve the following: purchase 4 new appliances (rSeries or VELOS) and use the multi-tenancy functionality on them, with 2 appliances deployed in DC 1 and the other 2 in DC 2. They would have tenants created on them that support the existing HA deployments. See the picture, which hopefully depicts what I'm trying to explain: the dashed lines depict an HA relationship, and there is a mix of tenants in active/standby state across all 4 appliances. I am conscious of planning for the failure of an appliance and ensuring the remaining appliances can cope with the load. The other thing I'm trying to keep in mind is that BIG-IP Next is coming within the next couple of years, so ideally I do not want to paint myself into a corner with this proposal. Disclaimer: F5 is not my typical wheelhouse; I am familiar with day-to-day use. I'm just struggling to find good documentation that rules what I'm suggesting in or out. I'm not afraid of doing the reading myself if anyone has a good link to read through, and I'm also open to different suggestions.
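For what it's worth, tenants on rSeries/VELOS run standard TMOS, so from the BIG-IP side, pairing a tenant with an existing deployment looks like any other device-trust join. A rough, hedged sketch only; the device names, IP addresses and device-group name are placeholders, and the tenant itself is assumed to have been provisioned beforehand in the F5OS layer:

# On the new tenant: set its ConfigSync / failover addresses on the HA VLAN
modify cm device tenant-dc1.example.com configsync-ip 10.20.0.21 unicast-address { { ip 10.20.0.21 port 1026 } }

# On the existing unit it will pair with: add the tenant to the device trust
# (10.1.2.21 stands in for the tenant's management address), add it to the
# sync-failover device group, and push the configuration to it
modify cm trust-domain /Common/Root ca-devices add { 10.1.2.21 } name tenant-dc1.example.com username admin password <admin-password>
modify cm device-group dg-failover devices add { tenant-dc1.example.com }
run cm config-sync to-group dg-failover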
Cabling 2 LTM's to one router
I am trying to design a setup like the one below and need some assistance on how to make the connections:

[Diagram: an upstream Router with a link down to each of LTM01 and LTM02; LTM01 and LTM02 each also connect down to a single Linux Server.]

My questions are:
How do I set up the connection to the upstream router? Do I tell the router it is an LACP LAG, or just two independent connections? Do I do anything special on the LTMs?
How do I set up the connection to the internal Linux server? Do I tell the server that it's an LACP LAG? If not, how do I keep from getting a split route? Do I do anything special on the LTMs?

For what it's worth, the Linux box has fail-open NICs in it and will be connected to a pair of MLAG'ed switches farther downstream (LACP LAG on the Linux end).
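One detail that shapes the answer: as far as I know, a single LACP LAG cannot span LTM01 and LTM02 (each BIG-IP negotiates LACP on its own), so toward the router each LTM normally gets its own link or its own trunk. If one LTM has two physical links to the router, a hedged tmsh sketch, with interface numbers, VLAN name and addressing as placeholders:

# On LTM01, assuming interfaces 1.1 and 1.2 both cable to the upstream router
create net trunk upstream_trunk interfaces add { 1.1 1.2 } lacp enabled
create net vlan external interfaces add { upstream_trunk }
create net self external_self address 192.0.2.11/24 vlan external allow-service none

The same reasoning applies on the server-facing side: the Linux server would only see a working LAG if its NICs land on the same LTM (or on the MLAG'ed switch pair), not if they are split across LTM01 and LTM02.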
F5 Failover
I was wondering if anyone can tell me how failover of the F5 would work or what it would look like; also correct me if I am wrong anywhere. I have three ESXi hosts:

ESXi1: BIGF501
ESXi2: BIGF502
ESXi3: standby

To my understanding, you would NOT want BIGF501 to fail over (vMotion) in the event of an ESXi1 failure. Why is that? I think it has something to do with the licensing, but is there another reason? If everything is set up correctly and ESXi1 were to fail and not come back up in a timely fashion, how would BIGF502 be promoted to the primary? Would it be similar to Availability Groups, where the secondary automatically becomes your primary, and when the old primary comes back up it automatically becomes your secondary?
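For context on what the promotion looks like in practice: the two VEs decide active/standby between themselves over the failover network, so nothing on the ESXi side "promotes" anything. A small hedged sketch using the device names from the post:

# On either VE: check which unit is active and whether the pair is healthy and in sync
show cm failover-status
show cm sync-status

# After ESXi1 (and BIGF501) is back and healthy, traffic can be moved back manually
# by forcing the currently active unit to standby (run this on the active unit)
run sys failover standby

By default there is no automatic failback: when BIGF501 returns it comes up as standby and stays there unless auto-failback is enabled on the traffic group or a manual failover is forced, which differs somewhat from the Availability Group analogy.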
Trying Active-Active-Standby in LAB, 2 devices show as offline.
Hey gents, this was the plan:

BIGIP1-A: TG1
BIGIP2-A: TG2
BIGIP3-S: standby for both TG1 and TG2

But right now all TGs are only active on BIGIP3, and it shows as "Initializing / Not Synced". On BIGIP3 I ran tmsh list cm device unicast-address:

root@(bigip3)(cfg-sync In Sync)(Active)(/Common)(tmos) list cm device unicast-address
cm device bigip1.akm.com {
    unicast-address {
        {
            effective-ip 192.168.20.50
            effective-port cap
            ip 192.168.20.50
            port cap
        }
        {
            effective-ip 10.128.1.243
            effective-port cap
            ip 10.128.1.243
            port cap
        }
    }
}
cm device bigip2.akm.com {
    unicast-address {
        {
            effective-ip 192.168.20.51
            effective-port cap
            ip 192.168.20.51
            port cap
        }
        {
            effective-ip 10.128.1.244
            effective-port cap
            ip 10.128.1.244
            port cap
        }
    }
}
cm device bigip3.akm.com {
    unicast-address none
}

Since it's all a lab, here are the QKVIEWs on gdrive for all 3 devices. Thanks a ton for the help!
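The unicast-address none on bigip3 is the likely culprit: without failover unicast addresses the other units cannot hear bigip3's heartbeats, so each side assumes it must go active. A hedged sketch of the missing piece; the .52 and .245 addresses are guesses at bigip3's HA and management self IPs, so substitute the real ones:

# On bigip3: add failover unicast addresses matching the pattern used on bigip1/bigip2
modify cm device bigip3.akm.com unicast-address { { ip 192.168.20.52 port 1026 } { ip 10.128.1.245 port 1026 } }

# Then re-check the failover state and sync the device group
# (<device-group-name> is whatever the sync-failover group is called)
show cm failover-status
run cm config-sync to-group <device-group-name>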
Cannot assign virtual address to a traffic group
This was the plan:

BIGIP1-A: TG1
BIGIP2-A: TG2
BIGIP3-S: TG3

All the TGs should fail over to each other when a device goes down. However, on BIGIP3 I cannot assign the VA to TG3, and hence when TG3 fails over, the virtual server does not fail over with it. TG3 has a floating self IP which I assigned to it, but when I assign the VA and update, it just reverts back to traffic-group-local. In the TG3 CLI output, all I see is the floating self IP:

root@(bigip3)(cfg-sync In Sync)(Active)(/Common)(tmos) show cm traffic-group TG3 failover-objects
----------------------------------------------------
CM::Traffic Group
----------------------------------------------------
Object Type   Object Path                Traffic Group
----------------------------------------------------
self IP       /Common/TG3-internal-FIP   TG3

root@(bigip3)(cfg-sync In Sync)(Active)(/Common)(tmos) show cm traffic-group TG3 all-properties
--------------------------------------------------------------------------------------------
CM::Traffic-Group
Name  Device           Status   Next Active  Load  Next Active Load  Times Became Active  Last Became Active
--------------------------------------------------------------------------------------------
TG3   bigip1.akm.com   standby  true         -     1                 2                    2018-Jul-21 14:16:55
TG3   bigip2.akm.com   standby  false        -     -                 0                    -
TG3   bigip3.akm.com   active   false        1     -                 2                    2018-Jul-21 14:21:06

Any ideas to sort this, please?
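One thing that sometimes gets missed here: the traffic group is a property of the virtual address object, not of the virtual server, and it can be set directly in tmsh. A hedged sketch, using 10.128.10.30 as a stand-in for the real virtual address:

# Assign the virtual address (not the virtual server) to TG3, then verify
modify ltm virtual-address 10.128.10.30 traffic-group TG3
list ltm virtual-address 10.128.10.30 traffic-group
show cm traffic-group TG3 failover-objects

If the GUI keeps reverting the setting, it may also be worth checking whether the address overlaps a self IP or lives in a different partition; the tmsh change above will at least show whether the object itself accepts the assignment.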
Why is an Active-Active configuration not recommended by F5?
I was considering configuring our F5 LTMs in an Active/Active state within Cisco ACI, but I read here that this type of configuration is not recommended without having at least one F5 in standby mode: "F5 does not recommend deploying BIG-IP systems in an Active-Active configuration without a standby device in the cluster, due to potential loss of high availability." Why is this? With two F5s in Active/Active mode, they should still fail over to each other if one happens to go down. Would it take longer for one device to fail over to another that is already active, rather than to one that is truly standby?
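Part of the answer is capacity and failover ordering: with both units active there is no idle device guaranteed to absorb a failed peer's load, so each unit has to be sized to carry both traffic groups at once. A hedged sketch of making the intended ordering explicit, with placeholder device names:

# Give each traffic group an explicit failover order, so a single surviving unit
# is expected to pick up both groups (it must have the capacity to do so)
modify cm traffic-group traffic-group-1 ha-order { bigip1.example.com bigip2.example.com }
modify cm traffic-group traffic-group-2 ha-order { bigip2.example.com bigip1.example.com }
show cm traffic-group all-properties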
BIG-IQ HA inquiry
Hi, just want to ask whether the deployment plan below is possible:

DC 1: 1 x BIG-IQ VE
DC 2: 1 x BIG-IQ VE

Both BIG-IQ VEs would then be in an HA setup. Is this possible? The reason we are taking this approach is so that when one DC goes down, we still have a BIG-IQ in DC 2. Hoping for your advice based on your experience if you have already done this. Thanks.
Remote Logging Configuration in DSC
Hi, I'm encountering an issue while configuring remote logging in a DSC. While I can optionally set the local IP, I cannot define which interface to use for remote logging. When no local IP is configured, the logs are sent according to the TMOS routing table, but I need to send the logs through the management interface instead of the traffic interfaces.

I can reach my goal by configuring the local IP as the management interface's address. The trouble is that the configuration then needs to be synchronized, and when I synchronize, the other node's configuration doesn't get its own management IP; instead, no local IP is configured anymore and the traffic interfaces are used to send out syslog traffic.

Is there any way to configure remote logging in a DSC without synchronizing this part of the configuration, or is there a way to change the routing of syslog-ng so it uses the management interface by default? I see many users modifying the syslog-ng configuration itself instead of using the built-in configuration. Unfortunately, the documentation only says to set the local IP to a non-floating self IP in an HA configuration (https://support.f5.com/csp/article/K13080): "Note: For BIG-IP systems in a high availability (HA) configuration, the non-floating self IP address is recommended if using a Traffic Management Microkernel (TMM) based IP address." From my understanding and experience this would end in the same issue, because the non-floating self IP is not synchronized, but the remote logging configuration needs to be synchronized. I'm very thankful for every hint. Greets, svs
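One possible workaround, sketched under assumptions (the collector address 203.0.113.50 and the management gateway 192.0.2.254 are placeholders): instead of pinning a device-specific local IP in the synced syslog configuration, give each unit a route to the collector via the management gateway, so host-originated syslog should leave via the management interface on both devices while the remote-server definition stays identical and sync-safe. Whether this fits depends on the collector being reachable from the management network.

# On each unit: route the collector via the management gateway (placeholder addresses)
create sys management-route to-syslog network 203.0.113.50/32 gateway 192.0.2.254

# Shared remote syslog definition with no device-specific local-ip
modify sys syslog remote-servers add { remote-syslog-1 { host 203.0.113.50 remote-port 514 } }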