Hi Dabance,
F5 provides different Azure deployment designs which can be found here...
https://github.com/F5Networks/f5-azure-arm-templates/tree/master/supported
The "Autoscale" templates are covering a setup of two or more standalone VEs in a load balances configuration and does not utilize session state replication between those VEs. The load can be distributed via RR-DNS or front ending Azure LBs which are distributing the load between the individual VEs.
The "Failover" templates are covering a traditional Sync-Failover F5 setup including session state replication. The active/passive network integration is either handled by your VEs via Azure API calls (aka. dynamically assign the public IP to the currently active unit) or via front-ending Azure LBs.
Personally I don't use any of the provided templates, since they are not flexible enough (i.e. no 2-arm setup available and way too many pre-configured settings). Because of that I usually install two standalone 2-NIC VEs from scratch (i.e. MGMT and Production interfaces), create an LTM Sync-Failover cluster as usual (via the Self-IPs of the Production network) and then deploy an Azure LB in front of the units to provide network failover (L2 failover/clustering does not work in Azure).

In this setup each Virtual Server is simply configured with a /31 network mask (i.e. two consecutive IPs for each VS) and each of the VE units listens to just one of those /31 IPs (via additional Virtual Machine IPs). If VE unit A is currently active, the Azure load balancer will mark IP A as active and IP B as inactive and then forward the traffic via IP A to unit A. If VE unit B is currently active, the Azure load balancer will mark IP A as inactive and IP B as active and then forward the traffic via IP B to unit B.

The outcome of this setup is a fully functional Sync-Failover cluster with failover delays of 5-10 seconds.
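To make the /31 trick a bit more concrete, here is a rough tmsh sketch (the IPs, virtual server and pool names are just examples of mine, run on the BIG-IP shell):

```
# On the cluster (config gets synced): a network virtual server covering
# the two consecutive IPs, e.g. 10.0.1.10 and 10.0.1.11 (/31 = 255.255.255.254)
tmsh create ltm virtual vs_app \
  destination 10.0.1.10:443 \
  mask 255.255.255.254 \
  ip-protocol tcp \
  profiles add { tcp } \
  pool app_pool

# In Azure, 10.0.1.10 is added as an additional IP configuration on
# unit A's production NIC and 10.0.1.11 on unit B's production NIC,
# and both IPs go into the Azure LB backend pool.
```

Since the standby unit does not process traffic for the traffic group, the Azure LB health probe only succeeds against the currently active unit's IP, and that is what makes the LB "mark" one IP as active and the other one as inactive.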
Cheers, Kai