The Hitchhiker’s Guide to BIG-IP in Azure – “High Availability”

Hello and welcome to the third installment of “The Hitchhiker’s Guide to BIG-IP in Azure”. In previous posts, (I assume you read and memorized them… right?), we looked at the Azure infrastructure and the many options one has for deploying a BIG-IP into Azure. Here are links to the previous posts in case you missed them; they’re well worth the read. Let us now turn our attention to the next topic in our journey: high availability.

A key to ensuring high availability of your Azure-hosted application, (or any application for that matter), is eliminating any potential single points of failure. To that end, load balancing is typically used as the primary means of ensuring a copy of the application is always reachable. This is one of the most common reasons for utilizing a BIG-IP.

Those of us who have deployed the F5 BIG-IP in a traditional data center environment know that ensuring high availability, (HA), is about more than just having multiple pool members behind a single BIG-IP; it’s equally important to ensure the BIG-IP itself does not represent a single point of failure. The same holds true for Azure deployments: eliminate single points of failure. While the theory is the same for both on-premises and cloud-based deployments, the process of deploying and configuring for HA is not.

As you might recall from our first installment, due to infrastructure limitations common across public clouds, the traditional method of deploying the BIG-IP in an active/standby pair is not feasible. That’s ok; no need to search the universe. There’s an answer, and no, it’s not 42. (Sorry, couldn’t help myself.)

Active / Active Deployment

“Say, since I have to have at least 2 BIG-IPs for HA, why wouldn’t I want to use both?” Well, in most cases you probably would want to, and you can. Since the BIG-IP is basically just another virtual machine, we can make use of various native Azure resources, (refer to Figure 1), to provide high availability.

Availability Sets

The BIG-IPs can, (and should), be placed in an availability set. Azure then locates the BIG-IPs in separate fault and update domains, ensuring local hardware fault tolerance.
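
If you’re standing the environment up programmatically, creating the availability set is a single call with the Azure SDK for Python. Here’s a minimal sketch; the credentials, subscription, resource group, set name, and region below are all hypothetical placeholders:

```python
# Minimal sketch: create an availability set for a pair of BIG-IP VMs
# using the Azure SDK for Python (azure-mgmt-compute). The credential,
# subscription ID, resource group, and names are hypothetical.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

credentials = ServicePrincipalCredentials(
    client_id='<app-id>', secret='<secret>', tenant='<tenant-id>')
compute = ComputeManagementClient(credentials, '<subscription-id>')

# Two fault domains and multiple update domains keep the two BIG-IPs
# on separate racks and separate maintenance schedules.
avset = compute.availability_sets.create_or_update(
    'bigip-rg',        # resource group (hypothetical)
    'bigip-avset',     # availability set name (hypothetical)
    {
        'location': 'eastus',
        'platform_fault_domain_count': 2,
        'platform_update_domain_count': 5,
        'sku': {'name': 'Aligned'},  # required when using managed disks
    },
)
print(avset.name, 'created')
```

Each BIG-IP VM then references the availability set’s ID at creation time; note that a VM’s availability set cannot be changed after the VM exists.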

Azure Load Balancers

The BIG-IPs can be deployed behind an Azure load balancer to provide Active/Active high availability. It may seem strange to “load balance” a load balancer; however, it’s important to remember that the BIG-IP provides a variety of application services, (WAF, federation, SSO, SSL offload, etc.), in addition to traffic optimization and comprehensive load balancing.
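
The sketch below shows roughly what that front-end looks like when built with the Azure SDK for Python: a public frontend, a backend pool for the two BIG-IPs, a TCP health probe, and a rule tying them together. All names, IDs, and the region are hypothetical, and the BIG-IP NICs would be added to the backend pool separately:

```python
# Minimal sketch (hypothetical names/IDs): put two BIG-IPs behind an
# Azure load balancer with a TCP health probe, via azure-mgmt-network.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.network import NetworkManagementClient

sub, rg = '<subscription-id>', 'bigip-rg'
credentials = ServicePrincipalCredentials(
    client_id='<app-id>', secret='<secret>', tenant='<tenant-id>')
network = NetworkManagementClient(credentials, sub)

lb_id = ('/subscriptions/{}/resourceGroups/{}/providers/'
         'Microsoft.Network/loadBalancers/bigip-alb').format(sub, rg)
pip_id = ('/subscriptions/{}/resourceGroups/{}/providers/'
          'Microsoft.Network/publicIPAddresses/bigip-pip').format(sub, rg)

network.load_balancers.create_or_update(rg, 'bigip-alb', {
    'location': 'eastus',
    'frontend_ip_configurations': [
        {'name': 'fe', 'public_ip_address': {'id': pip_id}}],
    # Both BIG-IP NICs get added to this pool after VM creation.
    'backend_address_pools': [{'name': 'bigip-pool'}],
    # Probe the BIG-IP virtual server port; a failed probe pulls that
    # instance out of rotation while the other keeps serving traffic.
    'probes': [{'name': 'probe-443', 'protocol': 'Tcp', 'port': 443,
                'interval_in_seconds': 5, 'number_of_probes': 2}],
    'load_balancing_rules': [{
        'name': 'https', 'protocol': 'Tcp',
        'frontend_port': 443, 'backend_port': 443,
        'frontend_ip_configuration': {
            'id': lb_id + '/frontendIPConfigurations/fe'},
        'backend_address_pool': {
            'id': lb_id + '/backendAddressPools/bigip-pool'},
        'probe': {'id': lb_id + '/probes/probe-443'},
    }],
}).wait()
```

The Azure load balancer only spreads and health-checks traffic across the pair; all of the richer application services still happen on the BIG-IPs behind it.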


Azure Autoscale

For increased flexibility with respect to performance, capacity, and availability, BIG-IPs can be deployed into scale sets, (refer to Figure 2 below).  By combining multiple public-facing IP endpoints, multiple interfaces, and horizontal and vertical autoscaling, it’s possible to efficiently run multiple optimized, secure, and highly available applications.

Note:  Currently, multiple BIG-IP instance deployments, (including scale sets), must be deployed programmatically, typically via an ARM template.  Here’s the good news: F5 has several ARM templates available on GitHub at https://github.com/F5Networks/f5-azure-arm-templates.
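
Whichever template you pick from that repo, kicking the deployment off from Python looks roughly like the sketch below. The template URI and the parameter names are placeholders; use the actual template and parameters documented in the repo:

```python
# Minimal sketch: deploy an F5 ARM template programmatically with
# azure-mgmt-resource. The template URI and parameter values are
# placeholders; see the F5 GitHub repo for the real ones.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient

credentials = ServicePrincipalCredentials(
    client_id='<app-id>', secret='<secret>', tenant='<tenant-id>')
resource = ResourceManagementClient(credentials, '<subscription-id>')

deployment = resource.deployments.create_or_update(
    'bigip-rg',        # resource group (hypothetical)
    'bigip-deploy',    # deployment name (hypothetical)
    {
        'mode': 'Incremental',
        'template_link': {'uri': '<raw URL of the chosen F5 template>'},
        'parameters': {
            # ARM wraps every parameter value in a {'value': ...} envelope.
            # These parameter names are illustrative, not from a specific
            # F5 template.
            'adminUsername': {'value': 'azureuser'},
            'adminPassword': {'value': '<password>'},
        },
    },
)
deployment.wait()  # create_or_update returns a poller; block until done
print('Deployment state:', deployment.result().properties.provisioning_state)
```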

Active / Standby Deployment with Public Endpoint Migration

As I just mentioned, in most cases an active/active deployment is preferred.  However, there may be stateful applications that still require load balancing mechanisms beyond an Azure load balancer’s capability.  Thanks to the guys in product development, there’s an experimental ARM template available on GitHub for deploying a pair of Active/Standby BIG-IPs.  This deployment option mimics F5’s traditional on-premises model, (thanks again Mike Shimkus).

Global High Availability

With data centers literally located all over the world, it’s possible to place your application close to the end user wherever they might be located.  By incorporating BIG-IP DNS, (formerly GTM), applications can be deployed globally for performance as well as availability. 

Users can be directed to the most appropriate application instance.  In the event an application becomes unavailable or overloaded, users will be automatically redirected to a secondary subscription or region.  This can be implemented down to a specific virtual server; all other unaffected traffic will still be sent to the desired region.
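
Conceptually, the steering decision BIG-IP DNS makes for each query boils down to “answer with the closest healthy instance, or the next-best one if it’s down.” Here’s a toy, pure-Python illustration of that logic, (made-up regions, VIPs, and health states; this is not BIG-IP DNS code):

```python
# Toy illustration of GSLB steering logic (not BIG-IP DNS code):
# answer each DNS query with the nearest region whose virtual server
# is still healthy, falling back to the next-best region otherwise.
REGIONS = {
    'eastus':        {'vip': '40.0.0.10', 'healthy': True},
    'westeurope':    {'vip': '52.0.0.10', 'healthy': False},  # simulated outage
    'southeastasia': {'vip': '13.0.0.10', 'healthy': True},
}

# Hypothetical preference order per client geography.
PREFERENCE = {
    'europe':   ['westeurope', 'eastus', 'southeastasia'],
    'americas': ['eastus', 'westeurope', 'southeastasia'],
}

def resolve(client_geo):
    """Return the VIP of the first healthy region in preference order."""
    for region in PREFERENCE[client_geo]:
        if REGIONS[region]['healthy']:
            return REGIONS[region]['vip']
    raise RuntimeError('no healthy region available')

# A European user is transparently redirected to eastus while
# westeurope is down; traffic to other regions is unaffected.
print(resolve('europe'))    # -> 40.0.0.10
print(resolve('americas'))  # -> 40.0.0.10
```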

Well friends, that’s it for this week.  Stay tuned for next week when we take a look at life cycle management.  Or would you prefer some Vogon poetry?


Published Jun 21, 2017
Version 1.0


5 Comments

  • f51:

    Hi,

    We are using a single-NIC BIG-IP on our private network in Azure. By default the BIG-IP management SSL port is 8443, and I changed it to 443. My question is: can we use the same IP address for a virtual server that we are already using for the management and self IP? If yes, I am creating a virtual server on port 88 with pool members on port 88, but my virtual server is not talking to the back-end servers.

    Any suggestions?

  • Hi there,

    Yes, you would use the same IP address for MGMT, the self-IP, and the virtual server. On your virtual server, make sure you have SNAT enabled, (set to 'Automap'). This is required so that your back-end pool members return traffic to the BIG-IP rather than to their default route. That is likely your issue. Hope it helps.

    Greg
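
    For anyone scripting that setup, a rough equivalent with the f5-common-python SDK might look like the sketch below, (the self-IP, credentials, and object names are made up for illustration):

    ```python
    # Minimal sketch (hypothetical addresses/names): build the pool and
    # a port-88 virtual server with SNAT Automap via f5-common-python.
    from f5.bigip import ManagementRoot

    mgmt = ManagementRoot('10.0.1.4', 'admin', '<password>', port=443)

    # Pool with the two back-end servers listening on port 88.
    mgmt.tm.ltm.pools.pool.create(
        name='pool_app_88', partition='Common',
        monitor='/Common/tcp',
        members=[{'name': '10.0.1.10:88'}, {'name': '10.0.1.11:88'}])

    # Virtual server on the shared self-IP. SNAT Automap makes the pool
    # members answer back to the BIG-IP instead of their default gateway.
    mgmt.tm.ltm.virtuals.virtual.create(
        name='vs_app_88', partition='Common',
        destination='/Common/10.0.1.4:88', mask='255.255.255.255',
        ipProtocol='tcp', pool='/Common/pool_app_88',
        sourceAddressTranslation={'type': 'automap'})
    ```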


  • Hello sir,

    We are planning to deploy F5 in Azure for one of our clients; however, we are not finding any details or references for deploying F5 Active/Active instances. Please share any links or documents for the same.

  • Once the F5 device is created in Azure, can I create new VIPs to monitor an IaaS VM and its availability set? Is this similar to an on-premises F5 setup? 1. Create nodes, 2. Create pool, 3. Create VIP, 4. Create monitors (ICMP, HTTP GET).