Technical Forum

2x F5 VE in Azure doesn't ping each other



I'm building an F5 cluster: 2 VMs, each with 3 interfaces:

  • MGMT,
  • External,
  • Internal (+ HA)

In Azure I have 3 VNets, one per interface role.

When I started forming the cluster, I noticed the VMs do not ping each other on either the external or the internal interfaces.
Only MGMT works (those interfaces are currently open to the Internet - temporarily).

Since this is intra-subnet traffic, NSGs shouldn't be a problem. Still, just in case, we added a rule permitting internal-to-internal on any port/protocol. Nothing changed...
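The permit rule we added looks roughly like this - a sketch using the az CLI, where the resource group name, NSG name, and subnet prefix are all placeholders to adjust for your environment:

```shell
# Hypothetical example: allow any protocol/port between hosts in the internal subnet.
# Replace myRG, myNSG, and 10.0.3.0/24 with your actual resource group, NSG, and prefix.
az network nsg rule create \
  --resource-group myRG \
  --nsg-name myNSG \
  --name Allow-Internal-To-Internal \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes 10.0.3.0/24 \
  --source-port-ranges '*' \
  --destination-address-prefixes 10.0.3.0/24 \
  --destination-port-ranges '*'
```

Note that Azure's default NSG rules already include AllowVnetInBound, so intra-VNet traffic should pass even without an explicit rule like this.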

My setup is as simple as it can be:

  • internal floating
  • internal bigip1
  • internal bigip2


bigip1 selfip.png

bigip1 vlan.png


bigip2 selfip.png

bigip2 vlan.png

bigip1 can't ping bigip2, and ARP doesn't resolve. I tried adding a static ARP entry on both sides - it didn't help...
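The static ARP entries were added with tmsh along these lines (a sketch - the peer's self IP and MAC below are made-up placeholders, not values from this deployment):

```shell
# Hypothetical example: on bigip1, pin bigip2's internal self IP to its NIC MAC.
# Replace 10.0.3.8 and 00:0d:3a:2d:74:99 with the real peer IP and MAC.
tmsh create net arp bigip2_internal ip-address 10.0.3.8 mac-address 00:0d:3a:2d:74:99
```

(with the mirror-image entry created on bigip2.)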

[admin@bigip1:Active:Standalone] ~ # ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=0.733 ms
--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms
[admin@bigip1:Active:Standalone] ~ # ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.042 ms
--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
[admin@bigip1:Active:Standalone] ~ # ping
PING ( 56(84) bytes of data.
--- ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

[admin@bigip1:Active:Standalone] ~ # arp -a | grep 10.0.3
? ( at 00:0d:3a:2d:74:82 [ether] on internal
[admin@bigip1:Active:Standalone] ~ #

I even created a similar setup in GNS3, which works perfectly, so I assume something is wrong on the Azure side.
But after opening up the NSGs, I'm confused about what else it could be...

Using Version



This is definitely an issue on the Azure side. You have to allow communication between the internal networks from your BIG-IP: in the Restricted Src Address field, specify the addresses of the networks that are allowed to connect to the BIG-IP.

Review the following link to find the step you may have missed in your Azure configuration:

Additionally, this link has all the information:

Hope it works.

Hi @shadow_82,

I strongly advise against building a failover cluster in the cloud. Cloud resources are far too expensive to keep in standby mode, and failover might take longer than expected (= bad customer experience).

Instead, I recommend an active/active setup, using GSLB as a service (F5 DNS Load Balancer Cloud Service) to distribute traffic between the two instances. Use Terraform / AS3 (any form of automation) to keep the config on both BIG-IP nodes consistent.
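As an illustration of the AS3 approach, a minimal declaration (all tenant, app, and address values here are made-up placeholders) that can be POSTed to /mgmt/shared/appsvcs/declare on both instances might look like:

```json
{
  "class": "AS3",
  "action": "deploy",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "Example_Tenant": {
      "class": "Tenant",
      "Example_App": {
        "class": "Application",
        "template": "http",
        "serviceMain": {
          "class": "Service_HTTP",
          "virtualAddresses": ["10.0.1.100"],
          "pool": "web_pool"
        },
        "web_pool": {
          "class": "Pool",
          "monitors": ["http"],
          "members": [{
            "servicePort": 80,
            "serverAddresses": ["10.0.2.10", "10.0.2.11"]
          }]
        }
      }
    }
  }
}
```

Applying the same declaration to both BIG-IPs keeps their configs identical without needing a device group.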

If you insist on using failover - this is how it's done the F5 way:


Thanks for the answer!
This is something we are thinking about - though maybe not active/active, due to heavier troubleshooting when something goes wrong (you have to trace traffic going through 2 F5s, not 1).

Still, maybe keeping only 1 F5 running and the second shut down in the cluster - until an upgrade procedure is needed.

This might be the sweet spot.

You pay for 1 VM running 24/7/365 and spin up the 2nd one for upgrade purposes (to switch over).

Community Manager

@Jeff_Giroux_F5 did a series of lightboards on Azure deployments that might be helpful. Check them out here.