
2x F5 VE in Azure doesn't ping each other

shadow_82
Nimbostratus

Hi!

I'm building an F5 cluster: 2 VMs, 3 interfaces each:

  • MGMT,
  • External,
  • Internal (+ HA)

In Azure I have 3 VNets, matching those needs.

When I started forming the cluster, I noticed the VMs cannot ping each other on either the external or the internal interface.
Only MGMT works (the MGMT interfaces are currently open to the Internet - temporarily).

Since this is intra-subnet traffic, NSGs shouldn't be a problem. Still, just in case, we added a rule permitting internal-to-internal on any port/protocol. Nothing changed...
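For reference, a permit rule like the one described could look like this with the Azure CLI (a sketch only; the resource group, NSG name, and priority below are placeholders, not the poster's actual names):

```shell
# Allow all intra-subnet traffic on the internal subnet (10.0.3.0/24).
# "rg-f5" and "nsg-internal" are hypothetical names - substitute your own.
az network nsg rule create \
  --resource-group rg-f5 \
  --nsg-name nsg-internal \
  --name Allow-Intra-Internal \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes 10.0.3.0/24 \
  --destination-address-prefixes 10.0.3.0/24 \
  --source-port-ranges '*' \
  --destination-port-ranges '*'
```

Note that intra-subnet traffic is already allowed by the default `AllowVnetInBound` rule unless a higher-priority deny exists, which matches the observation that adding this rule changed nothing.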

The setup is as simple as it can be:

  • 10.0.3.10 - internal floating
  • 10.0.3.11 - internal bigip1
  • 10.0.3.12 - internal bigip2
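On the BIG-IP side, that addressing maps to self-IP configuration roughly like this (a sketch; the VLAN name, `allow-service` setting, and traffic group are assumptions - run the equivalent on bigip2 with 10.0.3.12):

```shell
# bigip1: non-floating self IP on the internal VLAN
tmsh create net self internal-self address 10.0.3.11/24 vlan internal \
    allow-service default
# bigip1: floating self IP, owned by whichever unit holds traffic-group-1
tmsh create net self internal-float address 10.0.3.10/24 vlan internal \
    traffic-group traffic-group-1 allow-service default
```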

BigIP1

bigip1 selfip.png

bigip1 vlan.png

BigIP2

bigip2 selfip.png

bigip2 vlan.png

bigip1 doesn't ping bigip2 and doesn't resolve its ARP entry. I tried adding static ARP entries on both sides - that didn't help either...

[admin@bigip1:Active:Standalone] ~ # ping 10.0.3.10
PING 10.0.3.10 (10.0.3.10) 56(84) bytes of data.
64 bytes from 10.0.3.10: icmp_seq=1 ttl=255 time=0.733 ms
^C
--- 10.0.3.10 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms
[admin@bigip1:Active:Standalone] ~ # ping 10.0.3.11
PING 10.0.3.11 (10.0.3.11) 56(84) bytes of data.
64 bytes from 10.0.3.11: icmp_seq=1 ttl=64 time=0.042 ms
^C
--- 10.0.3.11 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
[admin@bigip1:Active:Standalone] ~ # ping 10.0.3.12
PING 10.0.3.12 (10.0.3.12) 56(84) bytes of data.
^C
--- 10.0.3.12 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

[admin@bigip1:Active:Standalone] ~ # arp -a | grep 10.0.3
? (10.0.3.10) at 00:0d:3a:2d:74:82 [ether] on internal
[admin@bigip1:Active:Standalone] ~ #
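One way to narrow this down is to capture on both units at once and see whether bigip1's ARP requests ever reach bigip2. If they leave bigip1 but never arrive, the drop is in the Azure fabric rather than in TMOS:

```shell
# On bigip1, while repeating the ping: watch ARP/ICMP on the internal VLAN
tcpdump -nni internal 'arp or icmp'

# On bigip2, in parallel: check whether anything arrives at all
tcpdump -nni internal 'arp or icmp'
```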

I even created a similar setup in GNS3, where it works perfectly, so I assume something is wrong on the Azure side.
But after ruling out NSGs, I'm confused about what it could be...
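One Azure-side gotcha worth checking once NSGs are ruled out: the Azure fabric only delivers traffic to addresses it knows about, so every self IP (and the floating IP) must also exist as an IP configuration on the corresponding Azure NIC - configuring them only in TMOS is not enough, and Azure silently drops traffic to unknown addresses. A sketch with hypothetical resource names:

```shell
# List the IPs Azure knows about on bigip2's internal NIC
# ("rg-f5" and "bigip2-int-nic" are hypothetical names - substitute your own).
az network nic ip-config list \
  --resource-group rg-f5 \
  --nic-name bigip2-int-nic \
  -o table

# If 10.0.3.12 is missing, add it as a secondary IP configuration:
az network nic ip-config create \
  --resource-group rg-f5 \
  --nic-name bigip2-int-nic \
  --name ipconfig-self \
  --private-ip-address 10.0.3.12
```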

Using Version 16.1.2.1

4 Replies

Sebastiansierra
Cirrocumulus

Hi,

This is definitely an issue on the Azure side. You have to allow communication between the internal networks and your BIG-IP: in Restricted Src Address, specify the addresses of the networks that are allowed to connect to the BIG-IP.

Review the following link to find the step you may have missed in your Azure configuration:

https://community.f5.com/t5/technical-articles/create-a-big-ip-ha-pair-in-azure/ta-p/282837

Additionally, this link has all the information:

https://github.com/F5Networks/f5-azure-arm-templates/tree/main/supported/failover/same-net/via-api/n...

Hope it works.

Hi @shadow_82,

I strongly advise against building a failover cluster in the cloud. Cloud resources are far too expensive to leave in standby mode, and failover might take longer than expected (= bad customer experience).

I recommend an active/active setup instead: utilize GSLB as a service (F5 DNS Load Balancer Cloud Service) to distribute traffic between the two instances, and use Terraform / AS3 (any form of automation) to keep the configuration on both BIG-IP nodes consistent.
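To sketch what "keep the config consistent" could look like with AS3: push the same declaration to both units from one script. The management addresses, credentials, and `declaration.json` below are placeholders, not a recommendation to keep default admin credentials:

```shell
# Post the identical AS3 declaration to both BIG-IP management endpoints.
# 203.0.113.11/.12 and admin:admin are placeholder values.
for host in 203.0.113.11 203.0.113.12; do
  curl -sk -u admin:admin \
    -H "Content-Type: application/json" \
    -X POST "https://${host}/mgmt/shared/appsvcs/declare" \
    -d @declaration.json
done
```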

If you insist on using failover - this is how it's done the F5 way: https://clouddocs.f5.com/products/extensions/f5-cloud-failover/latest/

KR
Daniel

Thanks for the answer!
This is something we are thinking about - though maybe not active/active, due to heavier troubleshooting when something goes wrong (you have to trace traffic going through 2 F5s, not 1).

Maybe keeping only 1 F5 running in the cluster and the second shut down - until some upgrade procedure is needed.

This might be the sweet spot.

You pay for 1 VM running 24/7/365 and spin up the 2nd one only for upgrades (to switch over to).
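The pattern described above could be scripted like this (hypothetical resource names; note that a deallocated unit will need a config sync before it can take traffic again):

```shell
# Deallocate the standby so it stops incurring compute charges.
# "rg-f5" and "bigip2-vm" are placeholder names.
az vm deallocate --resource-group rg-f5 --name bigip2-vm

# On upgrade day, bring it back and let it sync before switching over:
az vm start --resource-group rg-f5 --name bigip2-vm
```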

JRahm
Community Manager

@Jeff_Giroux did a series of lightboards on Azure deployments that might be helpful. Check them out here.