
Forum Discussion

dabance
Jun 24, 2019

BIG-IP HA on Azure Cloud

I have been going through some articles on implementing BIG-IP (LTM) HA on Azure cloud; however, I have stumbled upon contradictory statements: one says an Azure load balancer is required to achieve BIG-IP HA, whereas other implementations work without an Azure load balancer. Can someone please clarify which one is correct?

8 Replies

  • Hi Dabance,

     

    F5 provides different Azure deployment designs which can be found here...

     

    https://github.com/F5Networks/f5-azure-arm-templates/tree/master/supported

    The "Autoscale" templates cover a setup of two or more standalone VEs in a load-balanced configuration and do not utilize session state replication between those VEs. The load can be distributed via round-robin DNS or via front-ending Azure LBs that distribute the load between the individual VEs.

     

    The "Failover" templates cover a traditional Sync-Failover F5 setup including session state replication. The active/passive network integration is handled either by your VEs via Azure API calls (i.e. dynamically assigning the public IP to the currently active unit) or via front-ending Azure LBs.
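
     

    As a rough sketch of what the API-based variant does (the templates automate this; all resource names below are made up for illustration), re-pointing the shared public IP to the newly active unit looks roughly like this in the Azure CLI:

    ```shell
    # Hypothetical sketch: move the shared public IP to the now-active unit.
    RG=bigip-rg
    PIP=bigip-vip-pip

    # Detach the public IP from the previously active unit's external NIC...
    az network nic ip-config update \
      --resource-group "$RG" --nic-name bigip-a-ext-nic \
      --name ipconfig1 --remove publicIpAddress

    # ...and attach it to the newly active unit's external NIC.
    az network nic ip-config update \
      --resource-group "$RG" --nic-name bigip-b-ext-nic \
      --name ipconfig1 --public-ip-address "$PIP"
    ```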

     

    Personally I don't use any of the provided templates, since they are not flexible enough (e.g. no 2-arm setup available and way too many pre-configured settings). Instead I usually install two standalone 2-NIC VEs from scratch (i.e. MGMT and production interfaces), create an LTM Sync-Failover cluster as usual (via the Self-IPs of the production network) and then deploy an Azure LB in front of the units to provide network failover (since L2 failover/clustering does not work in Azure).

     

    In this setup each Virtual Server is simply configured with a /31 network mask (i.e. two consecutive IPs for each VS), and each VE unit listens on just one of those /31 IPs (via additional virtual machine IPs). If VE unit A is currently active, the Azure load balancer marks IP A as active and IP B as inactive and then forwards the traffic via IP A to unit A; if VE unit B is currently active, it marks IP A as inactive and IP B as active and forwards the traffic via IP B to unit B. The outcome of this setup is a fully functional Sync-Failover cluster with failover delays of 5-10 seconds.
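
     

    A minimal tmsh sketch of such a /31 virtual server, assuming hypothetical addresses 10.0.1.10/10.0.1.11 (unit A holds .10 and unit B holds .11 as additional Azure VM IPs; names and ports are illustrative only):

    ```shell
    # Hypothetical /31 VS covering the consecutive pair 10.0.1.10-10.0.1.11.
    # The definition is synced to both units; each unit only answers on the
    # IP it actually holds in Azure.
    tmsh create ltm virtual app_vs \
        destination 10.0.1.10:443 \
        mask 255.255.255.254 \
        ip-protocol tcp \
        pool app_pool \
        source-address-translation { type automap }
    ```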

     

    Cheers, Kai

     

    • Jim_M

      Hi Kai. You mention

      " In this setup each Virtual Server is simply configured with an /31 network mask (aka. two subsequent IPs for each VS) and each of the VE units is listening to just one of those /31 IPs (via additional Virtual Machine IPs)"

       

      Is there a good document that details best practice about how to do this? Including config of any required load balancer or traffic manager?

      • Hi Jim,

         

        AFAIK there is no such guide available from F5.

         

        Basically you have to treat the Azure-based VEs the same way you would treat a cluster in an on-prem environment. The missing L2 capabilities of Azure are simply replaced by those /31 VS instances (in Azure, IP-1 gets assigned to unit A and IP-2 to unit B) plus an Azure LB in front of those IP pairs, which performs health monitoring to detect which system is active and fails traffic over if needed.
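
         

        A hedged Azure CLI sketch of that front-ending LB piece (all names, ports and pool members below are hypothetical, not from an F5 guide): the probe targets the VS port, and only the unit currently holding the active /31 IP answers it, so the LB follows the active BIG-IP automatically.

        ```shell
        RG=bigip-rg
        LB=bigip-ext-lb

        # TCP probe on the VS port; only the active unit's IP responds.
        az network lb probe create \
          --resource-group "$RG" --lb-name "$LB" \
          --name vs-probe --protocol tcp --port 443

        # LB rule forwarding client traffic to the pool holding both VEs.
        az network lb rule create \
          --resource-group "$RG" --lb-name "$LB" \
          --name vs-rule --protocol Tcp \
          --frontend-port 443 --backend-port 443 \
          --frontend-ip-name LoadBalancerFrontEnd \
          --backend-pool-name bigip-backend \
          --probe-name vs-probe
        ```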

         

        Once you have got it up and running, you simply operate a usual, fully featured active-passive VE cluster with config and session state sync. There is basically no difference between on-prem and Azure anymore…

         

        Cheers, Kai


    • Enfield303

      Hello Kai, can you tell me what health probe you implemented on the Azure LB? I've deployed the F5 template which creates two active/passive F5s behind an Azure LB, but as I'm load balancing a UDP application (Always On VPN), I'm unsure what health probe I need to create on the Azure LB.

  • Review the supported tree:

    https://github.com/F5Networks/f5-azure-arm-templates/tree/master/supported

     

    As well as the experimental tree which has a few more options:

    https://github.com/F5Networks/f5-azure-arm-templates/tree/master/experimental

     

    You can also look into Terraform examples to build out the components via that tooling. Here are examples of using other orchestration methods (Terraform, Ansible):

    https://github.com/f5devcentral/Ansible-Terraform-Cloud-Templates