Forum Discussion
BIG-IP HA on Azure Cloud
Hi Dabance,
F5 provides different Azure deployment designs which can be found here...
https://github.com/F5Networks/f5-azure-arm-templates/tree/master/supported
The "Autoscale" templates are covering a setup of two or more standalone VEs in a load balances configuration and does not utilize session state replication between those VEs. The load can be distributed via RR-DNS or front ending Azure LBs which are distributing the load between the individual VEs.
The "Failover" templates are covering a traditional Sync-Failover F5 setup including session state replication. The active/passive network integration is either handled by your VEs via Azure API calls (aka. dynamically assign the public IP to the currently active unit) or via front-ending Azure LBs.
Personally, I don't use any of the provided templates, since they are not flexible enough (no 2-arm setup available and way too many pre-configured settings). Instead, I usually install two standalone 2-NIC VEs from scratch (MGMT and Production interfaces), create an LTM Sync-Failover cluster as usual (via the Self-IPs of the Production network) and then deploy an Azure LB in front of the units to provide network failover (L2 failover/clustering does not work in Azure).
In this setup each Virtual Server is simply configured with a /31 network mask (i.e., two consecutive IPs per VS), and each VE unit listens on just one of those /31 IPs (via additional Virtual Machine IPs). If VE unit A is currently active, the Azure load balancer will mark IP A as active and IP B as inactive and forward the traffic via IP A to unit A; if VE unit B is currently active, it will mark IP A as inactive and IP B as active and forward the traffic via IP B to unit B. The outcome of this setup is a fully functional Sync-Failover cluster with failover delays of 5-10 seconds...
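To make the /31 idea concrete, here is a minimal, hypothetical TMSH sketch (addresses and object names are placeholders, not an exact config):

    # One /31 network virtual server in the synced config of both units;
    # mask 255.255.255.254 covers the consecutive pair 10.0.1.10/10.0.1.11.
    tmsh create ltm virtual vs_app_https \
        destination 10.0.1.10:443 mask 255.255.255.254 \
        ip-protocol tcp pool pool_app
    # In Azure, 10.0.1.10 is added as a secondary IP on unit A's production
    # NIC and 10.0.1.11 on unit B's, so each unit answers on only one IP of
    # the pair.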
Cheers, Kai
- Jim_M, Jul 01, 2020 (Cirrus)
Hi Kai. You mention:
"In this setup each Virtual Server is simply configured with a /31 network mask (i.e., two consecutive IPs per VS), and each VE unit listens on just one of those /31 IPs (via additional Virtual Machine IPs)"
Is there a good document that details best practice for how to do this, including config of any required load balancer or traffic manager?
- Kai_Wilke, Jul 01, 2020 (MVP)
Hi Jim,
AFAIK there is no such guide available from F5.
Basically, you have to treat the Azure-based VEs the same way you would treat a cluster in an on-prem environment. The missing L2 capabilities of Azure are simply replaced by those /31 VS instances (in Azure, IP-1 gets assigned to Unit-A and IP-2 to Unit-B) and by an Azure LB in front of those IP pairs, which health-monitors which system is active and performs the failover if needed.
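As a hypothetical sketch of the Azure LB pieces (names and ports are placeholders): a TCP health probe on the VS port, which only the currently active unit answers, plus a matching load-balancing rule:

    # The probe hits each pool member's /31 IP on the VS port; the standby
    # unit does not answer, so traffic goes only to the active unit.
    az network lb probe create -g my-rg --lb-name f5-lb \
        -n probe-tcp443 --protocol Tcp --port 443
    az network lb rule create -g my-rg --lb-name f5-lb \
        -n rule-https --protocol Tcp --frontend-port 443 --backend-port 443 \
        --frontend-ip-name app-frontend --backend-pool-name f5-pool \
        --probe-name probe-tcp443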
Once you've got it up and running, you simply operate a usual, fully featured active/passive VE cluster with config and session state sync. There is basically no difference between on-prem and Azure anymore...
Cheers, Kai
- Jeff_Giroux_F5, Jul 01, 2020 (Ret. Employee)
Please review my articles, which show various HA patterns as well as what the BIG-IP virtual server should look like:
https://devcentral.f5.com/s/articles/Lightboard-Lessons-BIG-IP-Deployments-in-Azure-Cloud
And overall HA guidance for the 3 major cloud providers, with recommendations:
https://devcentral.f5.com/s/articles/F5-High-Availability-Public-Cloud-Guidance
- Kai_Wilke, Jul 01, 2020 (MVP)
Hi Jeff,
I developed the mentioned setup (let's call it the "Azure LB assisted /31 subnet mask Sync-Failover cluster" deployment) back in 2017/2018, when F5 offered only API-based Sync-Failover support, with all the drawbacks the Azure API can cause.
The major difference from your outlined "HA Using ALB for Failover" approach is that my scenario can use a 2-NIC template (just a Production NIC and a Management NIC) and uses multiple /31 subnet mask virtual servers with the same :80 and :443 port assignments, instead of multiple /0 subnet mask virtual servers with different ports for individual applications/services.
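In TMSH the two styles might look roughly like this (a hedged sketch with placeholder names, addresses and ports):

    # /31 style: one consecutive IP pair per application, standard ports
    tmsh create ltm virtual vs_app1 \
        destination 10.0.1.10:443 mask 255.255.255.254 pool pool_app1
    tmsh create ltm virtual vs_app2 \
        destination 10.0.1.12:443 mask 255.255.255.254 pool pool_app2
    # /0 wildcard style: one catch-all destination, a unique port per app
    tmsh create ltm virtual vs_app1_alt \
        destination 0.0.0.0:8081 mask any pool pool_app1
    tmsh create ltm virtual vs_app2_alt \
        destination 0.0.0.0:8082 mask any pool pool_app2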
Note: Since then I've checked the new releases of the supported ARM templates here and there and also had a couple of talks with the guys behind them. But to be honest, those wizard-based configurations are pretty unintuitive and produce a rather unfamiliar/unclean configuration set (at least for my heavily OCD-impaired brain). I will definitely continue to stick to an as-clean-as-possible 2-NIC standalone initial VE deployment (with all autogenerated settings removed) and then just create a Sync-Failover cluster setup like I would for on-prem environments. Compared to a well-known on-prem deployment, the one and only difference is those /31 subnet mask virtual servers (or /0 with different ports, as you like) and the front-ending Azure LB to make up for the limited Layer 2 capabilities of Azure. But that is all...
Cheers, Kai
- Enfield303, Nov 23, 2020 (Nimbostratus)
Hello Kai, can you tell me what health probe you implemented on the Azure LB? I've deployed the F5 template which creates two active/passive F5s behind an Azure LB, but as I'm load-balancing a UDP application (AlwaysOn VPN), I'm unsure what health probe I need to create on the Azure LB.