high availability
Questions about F5 BIG-IP Multi-Datacenter Configuration
We have an infrastructure with two datacenters (DC1 and DC2), each equipped with an F5 BIG-IP using the LTM module for DNS traffic load balancing to resolvers, and the Routing module to inject BGP routes to the Internet Gateways (IGW) for redundancy. Here's our current setup (based on the attached diagram):

- Each DC has a BIG-IP connected to resolvers via virtual interfaces (VPI1 and VPI2).
- Routing tables indicate VPI1 -> DC1 and VPI2 -> DC2.
- Each DC has its own IGW for Internet connectivity.

Question 1: Handling BIG-IP Failures
If the BIG-IP in one datacenter (e.g., DC1) fails, will the DNS traffic destined for its resolvers be automatically redirected to DC2 via BGP? How can BGP be configured to ensure this? Is it feasible and recommended to create an HA Group including the BIG-IPs from both datacenters for automatic failover? What are the limitations or best practices for such a setup across remote sites?

Question 2: IGW Redundancy
Currently, each datacenter has its own IGW. We'd like to implement redundancy between the IGWs of the two DCs. Can a protocol like HSRP or VRRP be used to share a virtual IP address between the IGWs of the two datacenters? If so, how can the geographical distance be managed? If not, what are the alternatives to ensure effective IGW redundancy in a multi-datacenter environment?

Question 3: BGP Optimization and Latency
We use BGP to redirect traffic to the available datacenter in case of resolver failures. How can BGP be configured to minimize latency during this redirection? Are there specific techniques or configurations recommended by F5 to optimize this?

Question 4: Alternatives to the DNS Module for Redundancy
We are considering a solution like the DNS module (GSLB) to intelligently manage DNS traffic redirection between datacenters in case of failures. However, this could increase costs. Are there alternatives to the DNS module that would achieve this goal (intelligent redirection and inter-datacenter redundancy) while leveraging the existing LTM and Routing modules? For example, advanced BGP configurations or other built-in features of these modules?

Thank you in advance for your advice and feedback!
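For the BGP side of Questions 1 and 3, the usual building blocks on a BIG-IP are route health injection (advertising the DNS listener's virtual address only while it is healthy) plus AS-path prepending from the backup datacenter, so the Internet prefers DC1 until DC1 withdraws the route. The sketch below shows the general shape only; the ASNs, IP addresses, and object names are placeholders, not taken from this environment.

    # tmsh on each BIG-IP: advertise the DNS virtual address while it is available (RHI),
    # and enable BGP for the route domain that owns it
    tmsh modify ltm virtual-address 192.0.2.53 route-advertisement enabled
    tmsh modify net route-domain 0 routing-protocol add { BGP }

    # ZebOS config on the DC2 (backup) unit, entered via "imish -r 0":
    # redistribute the injected /32 toward the IGW and prepend so DC1 wins while healthy
    router bgp 65002
     neighbor 10.2.0.1 remote-as 64512
     redistribute kernel route-map RHI-OUT
    !
    ip prefix-list DNS-VIP seq 5 permit 192.0.2.53/32
    !
    route-map RHI-OUT permit 10
     match ip address prefix-list DNS-VIP
     set as-path prepend 65002 65002

DC1 would advertise the same /32 without the prepend, so traffic only shifts to DC2 once DC1 stops advertising (BIG-IP down, or the virtual server marked down by its monitors). Whether this is preferable to GSLB for Question 4 is a separate trade-off.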
Adding New BigIP Units Using Free CPUs on i7800 (LB13 & LB14) to an existing HA Pair

Hello,

We have a pair of i7800 devices, each running two guests with the following configuration:

- i7800 Unit 1: LB09 & LB11
- i7800 Unit 2: LB10 & LB12

Current Setup:

- LB09 & LB10 are part of a High Availability (HA) pair, running in an Active/Active configuration.
- LB11 & LB12 are likewise an HA pair, running in Active/Active mode.
- Each LBxx guest is utilizing 6 CPUs (which is the maximum allowed per guest).
- The i7800 comes with 14 CPUs in total, so there are 2 unused CPUs per unit (not allocated to any guest).

Questions:

- Can we bring up new BIG-IP units (LB13 & LB14) using the 2 free CPUs on each i7800 unit?
- Can these new units join the existing HA pair of LB11 & LB12 and sync configuration with each other?
- Can the new units serve their own traffic groups (as shown in the diagram below)?
- Is there any potential limitation or issue with utilizing the remaining CPUs for this purpose? Will the i7800 run out of memory or run low on memory?

Goal: offload 20% of the traffic (a handful of VIPs) to the newly created LB13 & LB14 units. All LBxx units (11, 12, 13 & 14) should back each other up and sync configuration across.

Thank you for any assistance.
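If the guests are vCMP instances, carving LB13/LB14 out of the two spare cores would look roughly like the sketch below, run on each i7800 host. The guest name, image file, management addressing, and VLANs are placeholders; whether 2 cores (and the memory the host still has unreserved) are enough for the modules you plan to provision is exactly the sizing point to confirm first.

    # On the i7800 vCMP host (all values are placeholders)
    tmsh create vcmp guest LB13 \
        cores-per-slot 2 \
        initial-image BIGIP-<version>.iso \
        management-ip 10.0.0.113/24 \
        management-gw 10.0.0.1 \
        vlans add { ha external internal } \
        state deployed

    # Review what is allocated on the host after the new guest deploys
    tmsh list vcmp guest

Once LB13/LB14 are up, they join device trust and a sync-failover device group the same way any other BIG-IP does, so giving them their own traffic groups while LB11-LB14 back each other up is a device-group design question rather than a vCMP one.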
Consolidate: Multi-tenancy HA

Hi all,

I've spent the last week or so trawling through documentation and the forums trying to find out if what I'm thinking about doing is possible. At the moment we have way more physical Viprion chassis than we need, we have recently been able to decommission some 2000 series appliances, and we have some VE GTMs. We have an opportunity that might allow us to consolidate down and make life a little easier.

We have a mix of deployment methodologies in use, such as vCMP on a single Viprion, a Viprion HA pair (active/standby) between 2 DCs, and a Viprion HA pair within one DC. If I attempt to push a huge migration of these various deployments it will never get anywhere, so I am hoping that I can achieve the following.

Purchase 4 new appliances (rSeries or VELOS) and use the multi-tenancy functionality on them: 2 appliances deployed in DC 1 and the other 2 in DC 2. They would have tenants created on them to support the existing HA deployments. See the picture, which hopefully depicts what I'm trying to explain. The dashed lines depict an HA relationship; hopefully it's obvious that there is a mix of tenants in active/standby state across all 4 appliances.

I am conscious of planning for the failure of an appliance and ensuring that the remaining appliances can cope with the load, etc. The other thing I am trying to be aware of is that BIG-IP Next is coming within the next couple of years, so ideally I do not want to paint myself into a corner with this proposal.

Disclaimer: F5 is not my typical wheelhouse; I am familiar with day-to-day use. I'm just struggling to find good documentation that rules what I'm suggesting in or out. I'm not afraid of doing the reading myself if anyone has a good link to read through. Also, I'm open to different suggestions.
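Whichever platform you land on (rSeries or VELOS), each BIG-IP tenant is a full TMOS instance, so every dashed line in the picture just becomes an ordinary device-service cluster between tenants on different appliances; the failover logic itself is the same as on the Viprions today. A minimal sketch of one such pairing is below. Device names, IPs, and the device-group name are placeholders, and device trust between the two tenants is assumed to have been established already.

    # On tenant A (repeat on tenant B with its own addresses)
    tmsh modify cm device tenantA-dc1.example.com \
        configsync-ip 10.10.5.11 \
        unicast-address { { ip 10.10.5.11 port 1026 } } \
        mirror-ip 10.10.5.11

    # Build the sync-failover group spanning the two tenants
    tmsh create cm device-group dg-tenantAB type sync-failover \
        devices add { tenantA-dc1.example.com tenantB-dc2.example.com } \
        auto-sync enabled network-failover enabled

    tmsh run cm config-sync to-group dg-tenantAB

Which appliance is active for which tenant then comes down to traffic-group placement, which is also where you would model "remaining appliances must cope if one fails".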
Cabling 2 LTM's to one router

I am trying to design a setup like the one below and need some assistance on how to make the connections:

           +----------+
    +------+  Router  +-----+
    |      +----------+     |
    |                       |
    +----------+            +-----------+
    |  LTM01   +------------+   LTM02   |
    +----------+            +-----------+
         |                        |
         |    +--------------+    |
         +----+ Linux Server +----+
              +--------------+

My questions are:

How do I set up the connection to the upstream router? Do I tell the router it is an LACP LAG or just two independent connections? Do I do anything special on the LTMs?

How do I set up the connection to the internal Linux server? Do I tell the server that it's an LACP LAG? If not, how do I keep from getting a split route? Do I do anything special on the LTMs?

For what it's worth, the Linux box has fail-open NICs in it and will be connected to a pair of MLAG'ed switches farther downstream (LACP LAG on the Linux end).
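One point of reference for the LACP part: the two LTMs are separate appliances, so a single LAG cannot span both of them; any link aggregation has to be per-LTM (multiple cables from the same LTM to the router), while the single router-to-LTM links in the diagram are just independent connections. If you do cable more than one port per LTM, a trunk is defined roughly as below; interface numbers, VLAN name, and self IP are placeholders.

    # On LTM01 (repeat on LTM02 with its own self IP); values are placeholders
    tmsh create net trunk rtr_uplink interfaces add { 1.1 1.2 } lacp enabled
    tmsh create net vlan external interfaces add { rtr_uplink }
    tmsh create net self 198.51.100.11/24 vlan external allow-service none

The same reasoning applies on the Linux side: a LAG from the server can only terminate on one LTM (or on the MLAG'ed switches), not across the pair.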
F5 Failover

I was wondering if anyone can tell me how failover of the F5 would work, or what it would look like. Also, correct me if I am wrong anywhere.

I have three ESXi hosts:

- ESXi1: BIGF501
- ESXi2: BIGF502
- ESXi3: standby

To my understanding, you would NOT want BIGF501 to fail over (vMotion) in the event of an ESXi1 failure. Why is that? I think it has something to do with the licensing, but is there another reason?

If everything is set up correctly and ESXi1 were to fail and not come back up in a timely fashion, how would BIGF502 be promoted to PRIMARY? Would it be similar to Availability Groups, where the SECONDARY automatically becomes your PRIMARY, and when the old PRIMARY comes back up it automatically becomes your SECONDARY?
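Not a complete answer, but for the "how does BIGF502 get promoted" part: in a normal device-service cluster between the two VEs, the standby promotes itself as soon as the failover heartbeats from the active unit stop, independently of anything vSphere does with the VM. A few tmsh commands that show and drive that behaviour (device names are placeholders):

    # On either VE
    tmsh show cm failover-status      # which unit is ACTIVE / STANDBY and why
    tmsh show cm traffic-group        # which device currently owns each traffic group

    # On the active unit, to hand traffic over deliberately (e.g. before maintenance)
    tmsh run sys failover standby

Whether the old active takes the traffic back when it returns depends on the traffic group's auto-failback setting (disabled by default), so the Availability Group analogy roughly holds.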
Trying Active-Active-Standby in LAB, 2 devices show as offline.

Hey Gents,

This was the plan:

- BIGIP1-A: TG1
- BIGIP2-A: TG2
- BIGIP3-S: standby for both TG1 and TG2

But right now all TGs are ACTIVE only on BIGIP3, and it shows as "Initializing / Not Synced". On BIGIP3 I ran tmsh list cm device unicast-address:

    root@(bigip3)(cfg-sync In Sync)(Active)(/Common)(tmos) list cm device unicast-address
    cm device bigip1.akm.com {
        unicast-address {
            {
                effective-ip 192.168.20.50
                effective-port cap
                ip 192.168.20.50
                port cap
            }
            {
                effective-ip 10.128.1.243
                effective-port cap
                ip 10.128.1.243
                port cap
            }
        }
    }
    cm device bigip2.akm.com {
        unicast-address {
            {
                effective-ip 192.168.20.51
                effective-port cap
                ip 192.168.20.51
                port cap
            }
            {
                effective-ip 10.128.1.244
                effective-port cap
                ip 10.128.1.244
                port cap
            }
        }
    }
    cm device bigip3.akm.com {
        unicast-address none
    }

Since it's all a lab, here are the QKVIEWs on gdrive for all 3 devices.

Thanks a ton for the help!
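For what it's worth, the "unicast-address none" line for bigip3 in that output would explain the symptom on its own (assuming multicast failover isn't configured): the peers have no address to send network-failover heartbeats to, so they cannot see bigip3's state. A hedged sketch of the fix is below; the two IPs are guesses that simply continue the .50/.51 and .243/.244 pattern, so substitute bigip3's real HA and management self IPs.

    # Run on bigip3 (IPs are assumed placeholders, not taken from the qkviews)
    tmsh modify cm device bigip3.akm.com unicast-address { { ip 192.168.20.52 port 1026 } { ip 10.128.1.245 port 1026 } }

    # Then confirm all three devices see each other
    tmsh list cm device unicast-address
    tmsh show cm failover-status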
Cannot assign virtual address to a traffic group

This was the plan:

- BIGIP1-A: TG1
- BIGIP2-A: TG2
- BIGIP3-S: TG3

All the TGs should fail over to each other when their devices go down. However, on BIGIP3 I cannot assign the VA to TG3, and hence when TG3 fails over, the virtual server doesn't fail over with it. TG3 does have a floating self IP which I assigned to it, but on assigning the VA and updating, it just reverts back to traffic-group-local.

In the TG3 CLI output all I see is the FIP:

    root@(bigip3)(cfg-sync In Sync)(Active)(/Common)(tmos) show cm traffic-group TG3 failover-objects
    ----------------------------------------------------
    CM::Traffic Group
    ----------------------------------------------------
    Object Type   Object Path               Traffic Group
    ----------------------------------------------------
    self IP       /Common/TG3-internal-FIP  TG3

    root@(bigip3)(cfg-sync In Sync)(Active)(/Common)(tmos) show cm traffic-group TG3 all-properties
    --------------------------------------------------------------------------------------------
    CM::Traffic-Group
    Name  Device          Status   Next    Load  Next Active  Times Became  Last Became
                                   Active        Load         Active        Active
    --------------------------------------------------------------------------------------------
    TG3   bigip1.akm.com  standby  true    -     1            2             2018-Jul-21 14:16:55
    TG3   bigip2.akm.com  standby  false   -     -            0             -
    TG3   bigip3.akm.com  active   false   1     -            2             2018-Jul-21 14:21:06

Any ideas to sort this, please?
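In case it helps anyone hitting the same behaviour: the object that has to carry the traffic group is the virtual address itself (not the virtual server or a self IP), and it can be set from tmsh directly. The address below is a placeholder for whatever destination the virtual server uses.

    tmsh modify ltm virtual-address /Common/10.128.10.100 traffic-group TG3
    tmsh list ltm virtual-address /Common/10.128.10.100 traffic-group

If the GUI keeps reverting the setting, it is worth checking whether a config sync from a peer that still holds the old value is overwriting the change.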
Why is an Active-Active configuration not recommended by F5?

I was considering configuring our F5 LTMs in an Active/Active state within Cisco ACI, but I read here that this type of configuration is not recommended without having at least one F5 in standby mode:

"F5 does not recommend deploying BIG-IP systems in an Active-Active configuration without a standby device in the cluster, due to potential loss of high availability."

Why is this? With two F5s in Active/Active mode, they should still fail over to each other if one happens to go down. Would it take longer for one device to fail over to another that is active, rather than being truly standby?
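The capacity arithmetic usually behind that recommendation is short enough to show directly; the utilisation figures below are invented purely for illustration.

    Unit A steady-state load: 70% of its own capacity
    Unit B steady-state load: 60% of its own capacity

    If A fails, B must absorb A's traffic on top of its own:
        60% + 70% = 130%  ->  the surviving unit is oversubscribed

    A two-unit Active/Active pair only survives a failover cleanly when each
    unit normally runs at or below ~50%, i.e. the combined load fits on one box.

Failover from an active peer is not inherently slower; the risk the quoted statement appears to point at is that, without a standby device (or strict capacity discipline), the surviving unit may simply not have the headroom, which is the "potential loss of high availability".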
BIG-IQ HA inquiry

Hi,

Just want to ask for assistance on whether the deployment plan below is possible:

- DC 1: 1 x BIG-IQ VE
- DC 2: 1 x BIG-IQ VE

Both BIG-IQ VEs would then be in an HA setup. Is this possible? The reason we want to go with this approach is so that if one DC goes down, we still have a BIG-IQ in DC 2. Hoping for your advice based on your experience, if you have already done this kind of deployment.

Thanks.