Active/Active load balancing examples with F5 BIG-IP and Azure Load Balancer

Multiple standalone devices, or a cluster? Let's discuss what is possible and easily supportable in public cloud.

Background

A couple of years ago I wrote an article about some practical considerations when using Azure Load Balancer with BIG-IP. Customers have continued to use it over time, so I thought I'd follow up with an article that specifically discusses Active/Active load balancing options.

I'll use Azure's Standard Load Balancer as an example, but you can apply the same ideas to other cloud providers. In fact, the customer I most recently helped with this very question was running in Google Cloud.

This article focuses on using standard TCP load balancers in the cloud.

Why Active/Active?

Most customers run 2x BIG-IPs in an Active/Standby cluster on-premises, and it's extremely common to do the same in public cloud. Since simplicity and supportability are key to successful migration projects, it's often best to stick with architectures you know and can support.

However, if you are confident in your cloud engineering skills, or if you want more than 2x BIG-IPs processing traffic, you may consider running them all Active. Of course, if the total throughput across your N BIG-IPs exceeds what N-1 devices can support, the loss of a single VM will leave the remaining device(s) with more traffic than they can handle. For example, three devices each carrying 8 Gbps of a 24 Gbps load cannot absorb the failure of one device if each is limited to 10 Gbps. I recommend choosing Active/Active only if you're confident in your purpose and skillset.

Let's define Active/Active

Sometimes this term is used ambiguously. I'll cover three approaches using Azure Load Balancer, each slightly different:

  • multiple standalone devices
  • Sync-Only group using Traffic Group None
  • Sync-Failover group using Traffic Group None

Each of these will use a standard TCP cloud load balancer. This article does not cover other ways to run multiple Active devices, which I've outlined at the end for completeness. 

Multiple standalone appliances

This is a straightforward approach and an ideal target for cloud architectures. When multiple devices each receive and process traffic independently, the overhead work of disaggregating traffic across the devices can be done by other solutions, like a cloud load balancer. (Other out-of-scope solutions could be ECMP, BGP, DNS load balancing, or gateway load balancers.) Scaling out horizontally can be a matter of simple automation, and there is no cluster configuration to maintain. The only limit to the number of BIG-IPs will be any limits of the cloud load balancer.

Two standalone BIG-IPs that are unaware of each other and are independently configured.
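
As a rough sketch, a Standard Azure load balancer in front of two standalone BIG-IPs could be stood up with the Azure CLI along these lines. The resource, VNet, and NIC names (myRg, myVnet, bigipLb, bigip1-ext-nic) are hypothetical, and the last step is repeated for each device:

    # Standard SKU load balancer with a frontend IP and a backend pool for the BIG-IPs
    az network lb create --resource-group myRg --name bigipLb --sku Standard \
      --vnet-name myVnet --subnet external --frontend-ip-name fe-app --backend-pool-name bigipPool

    # TCP health probe and a load balancing rule for port 443
    az network lb probe create --resource-group myRg --lb-name bigipLb \
      --name probe443 --protocol Tcp --port 443
    az network lb rule create --resource-group myRg --lb-name bigipLb --name rule443 \
      --protocol Tcp --frontend-port 443 --backend-port 443 \
      --frontend-ip-name fe-app --backend-pool-name bigipPool --probe-name probe443

    # Add each BIG-IP's external NIC ip-config to the backend pool (repeat per device)
    az network nic ip-config address-pool add --resource-group myRg \
      --nic-name bigip1-ext-nic --ip-config-name ipconfig1 \
      --lb-name bigipLb --address-pool bigipPool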

The main disadvantage of this approach is the risk of misconfiguration by human operators. Often a customer is not confident that they can configure two separate devices consistently over time. This is why automation for configuration management is ideal. In the real world, it's also a reason customers consider our next approach.

Clustering with a sync-only group

A Sync-Only device group allows us to sync some configuration data between devices, but not to fail over configuration objects in floating traffic groups between devices, as we would in a Sync-Failover group. With this approach, we can sync traffic objects between devices, assign them to Traffic Group None, and both devices will be considered Active. Both devices will process traffic, but changes only need to be made on a single device in the group.

A Sync-Only device group will sync a folder or partition between devices.

In the example pictured above:

  • The 2x BIG-IP devices are in a Sync-Only group called syncGroup
  • /Common partition is not synced between devices
  • /app1 partition is synced between devices
    • the /app1 partition has Traffic Group None selected
    • the /app1 partition has the Sync-Only group syncGroup selected
  • Both devices are Active and will process traffic received on Traffic Group None

Screenshot when configuring a new partition to be synced between group members
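
For reference, the tmsh equivalent of the configuration pictured above might look roughly like the sketch below. The hostnames and partition name are hypothetical, device trust is assumed to already be established, and exact syntax can vary by version:

    # On one device: create the Sync-Only group containing both devices
    tmsh create cm device-group syncGroup devices add { bigip1.example.com bigip2.example.com } \
      type sync-only auto-sync enabled

    # Create the partition to be synced, tied to the Sync-Only group and to Traffic Group None
    tmsh create auth partition app1
    tmsh modify sys folder /app1 device-group syncGroup traffic-group none

    # Push the initial configuration to the group
    tmsh run cm config-sync to-group syncGroup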

The disadvantage to this approach is that you can create an invalid configuration by referring to objects that are not synced. For example, if Nodes are created in /Common, they will exist on the device on which they were created, but not on other devices. If a Pool in /app1 then references Nodes from /Common, the resulting configuration will be invalid for devices that do not have these Nodes configured. 
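
As a hypothetical illustration of that failure mode, the pool below would sync to the peer device, but the node it references would not:

    # Created in /Common, so it exists only on the device where it was created
    tmsh create ltm node /Common/app1-node-1 address 10.0.1.10

    # Created in the synced /app1 partition; the peer device receives a pool
    # that references a node it does not have
    tmsh create ltm pool /app1/app1_pool members add { /Common/app1-node-1:443 }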

Another consideration is that an operator must use and understand partitions. These are simple and should be embraced. However, not all customers understand the use of partitions and many prefer to use /Common only, if possible.

The big advantage here is that changes only need to be made on a single device, and they will be replicated to the other devices (up to 32 devices in a Sync-Only group). The risk of inconsistent configuration due to human error is reduced. Each device has a small green "Active" icon in the top left-hand corner of the console, reminding operators that each device is Active and will process incoming traffic on Traffic Group None.

Failover clustering using Traffic Group None

Our third approach is very similar to the second. However, instead of a Sync-Only group, we will use a Sync-Failover group. A Sync-Failover group will sync all traffic objects in the default /Common partition, allowing us to keep everything in the default partition and avoid additional partitions. This creates a traditional Active/Standby pair for a failover traffic group, and a Standby device will not respond to data plane traffic. So how do we make this Active/Active?

When we create our VIPs in Traffic Group None, all devices will process traffic received on these Virtual Servers. One device will show "Active" and the other "Standby" in the console, but this is only the status of the floating traffic group. We don't need to use the floating traffic group at all, and by using Traffic Group None we have an Active/Active configuration in terms of traffic flow.

 

Active/Standby cluster where Virtual Servers are created in Traffic Group None, thereby allowing both devices to process traffic
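
In tmsh terms, the key step in this approach is assigning the virtual address (not the virtual server) to Traffic Group None. The names, addresses, and device group below are hypothetical, and the pool is assumed to already exist:

    # Virtual server in /Common, synced to the peer by the Sync-Failover group
    tmsh create ltm virtual /Common/vs_app1 destination 10.0.2.100:443 \
      ip-protocol tcp profiles add { tcp } pool app1_pool \
      source-address-translation { type automap }

    # Put the underlying virtual address in Traffic Group None so every device answers for it
    tmsh modify ltm virtual-address /Common/10.0.2.100 traffic-group none

    # Sync the change to the Sync-Failover group
    tmsh run cm config-sync to-group failoverGroup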

The advantage here is similar to the previous example: human operators only need to configure objects on a single device, and all changes are synced between device group members (up to 8 in a Sync-Failover group). Another advantage is that you can use the /Common partition, which was not possible with the previous example.

The main disadvantage here is that the console will show "Active" on one device and "Standby" on the other, and this can confuse an operator who is familiar only with Active/Standby clusters that use traffic groups for failover. While this third approach is legitimate and technically sound, it's worth considering whether your daily operations and support teams have the knowledge to support it.

Other considerations

Source NAT (SNAT)

It is almost always a requirement to SNAT traffic when using an Active/Active architecture, and this especially applies in the public cloud, where our options for other networking tricks are limited. If you have a requirement to see the true source IP and need to use multiple devices in Active/Active fashion, consider the Azure or AWS Gateway Load Balancer options. Alternative solutions like NGINX and F5 Distributed Cloud may also be worth considering in high-value, hard-requirement situations.
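
As a quick sketch of what SNAT looks like on a virtual server (the names and addresses are hypothetical), SNAT Automap is the simplest option, and an explicit SNAT pool can be used when more ephemeral ports are needed:

    # Simplest option: translate client source addresses to a self IP
    tmsh modify ltm virtual /Common/vs_app1 source-address-translation { type automap }

    # Alternative: a dedicated SNAT pool
    tmsh create ltm snatpool app1_snatpool members add { 10.0.2.50 10.0.2.51 }
    tmsh modify ltm virtual /Common/vs_app1 source-address-translation { type snat pool app1_snatpool }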

Alternatives to a cloud load balancer

This article is not referring to F5 with Azure Gateway Load Balancer, or to F5 with AWS Gateway Load Balancer. Those gateway load balancer solutions are another way for customers to run appliances as multiple standalone devices in the cloud. However, they typically require routing, not proxying, the traffic (i.e., they don't allow destination NAT, which many customers intend to use with BIG-IP).

This article is also not referring to other ways you might achieve Active/Active architectures, such as DNS-based high availability, or using routing protocols, like BGP or ECMP. 

Note that using multiple traffic groups to achieve Active/Active BIG-IPs - the traditional approach on-prem or in private cloud - is not practical in public cloud, as briefly outlined below.

Failover of traffic groups with Cloud Failover Extension (CFE)

One option for Active/Standby high availability of BIG-IP is to use the CFE, which can programmatically update IP addresses and routes in Azure at the time of device failure. Since CFE does not support Active/Active scenarios, it is appropriate only for failover of a single traffic group (i.e., Active/Standby).

Conclusion

Thanks for reading! In general, I see that Active/Standby solutions work for many customers, but if you are confident in your skills and have a need for Active/Active F5 BIG-IP devices in the cloud, please reach out if you'd like me to walk you through these options and explore any other possibilities.

Related articles

Practical Considerations using F5 BIG-IP and Azure Load Balancer

Deploying F5 BIG-IP with Azure Cross-Region Load Balancer

Updated Mar 11, 2024
Version 10.0
  • awan_m

    Hi - I am trying to configure a Sync-Only device group F5 deployment in GCP - I have 2 single-NIC F5s.

    To set up device trust I need to have a non-management IP address. Can you help me understand how you set it up in your environment?

    Did you add a second IP to the internal VLAN on both devices and allow communication on heartbeat ports between the 2 devices on that VLAN?

    Secondly, how did you configure virtual servers - was it the same IP address on both devices?

    Thanks

    • MichaelOLeary

      Hi awan_m 

      Yes, add a second network interface for the non-management traffic. Regarding the IP address, usually I have a different IP address on each device.
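
      For reference, once the second interface and a non-management self IP exist on each device, pointing the device objects at that address looks roughly like this (the hostname and IP are hypothetical, so adjust per device):

          # Run on each device, referencing its own non-management self IP
          tmsh modify cm device bigip1.example.com configsync-ip 10.0.3.11 \
            mirror-ip 10.0.3.11 unicast-address { { ip 10.0.3.11 port 1026 } }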

      Please shoot me a message over this website if you want more help and we can email directly, thanks!!

      Mike