All I can find for the failover is:
Feb 6 11:45:56 local/pri-4600 notice mcpd[3447]: 01070638:5: Pool member bcj.bda.cfd.cab:0 monitor status down.
Feb 6 11:45:56 local/pri-4600 notice mcpd[3447]: 01070638:5: Pool member bcj.bda.cfd.cab:0 monitor status down.
Feb 6 11:45:56 local/pri-4600 notice mcpd[3447]: 01070638:5: Pool member bcj.bda.cfd.cab:0 monitor status down.
Feb 6 11:45:56 local/pri-4600 notice sod[3440]: 01140029:5: HA pool_memb_down FWSM_Wildcard_Virt1 fails action is failover.
Feb 6 11:45:56 local/pri-4600 notice sod[3440]: 010c0018:5: Standby
It doesn't give any assurance that the other 2 pings in the window had also failed, or would have failed.
I presume the 3 lines are because there are 3 pools in the F5 checking the same gateway: the FWSM_Wildcard_Virt pool used by the numerous "FWSM_F5_xxx_Routing" Forwarding (IP) virtual servers that create the default route for most of the VLANs behind the F5, plus the FWSM_Wildcard_Virt1 and FWSM_Wildcard_Virt2 pool definitions for Unit 1 and Unit 2.
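For context, here's roughly what that setup looks like expressed as tmsh commands. This is a sketch only: the address and virtual server name are placeholders, the syntax may differ on the version these 4600s run, and the min-up-members settings are the pool-level knob with the same failover/reboot action choices as the gateway fail-safe screen, so treat the mapping as an assumption rather than our exact config.

    # Sketch only -- placeholder address, approximate syntax.
    # The default gateway_icmp monitor pings every 5s and times out at 16s,
    # which is the roughly-3-ping window mentioned above.
    tmsh create ltm pool FWSM_Wildcard_Virt1 \
        members add { 192.0.2.1:0 } \
        monitor gateway_icmp \
        min-up-members 1 \
        min-up-members-checking enabled \
        min-up-members-action failover

    # One of the wildcard Forwarding (IP) virtual servers that carry the
    # routed traffic from the VLANs behind the F5.
    tmsh create ltm virtual FWSM_F5_example_Routing \
        destination 0.0.0.0:0 mask 0.0.0.0 ip-forward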
Even under normal circumstances, ping times are:
100 packets transmitted, 100 received, 0% packet loss, time 99055ms
rtt min/avg/max/mdev = 0.377/1.036/1.990/0.323 ms
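For what it's worth, one way to line the next blip up against these monitor messages is to leave a timestamped one-per-second ping running against the gateway from the F5's shell. A rough sketch (the address and log path are placeholders):

    # Sketch: one ping per second, each line timestamped, so brief drops
    # can be matched against the monitor down/up messages in /var/log/ltm.
    GW=192.0.2.1        # placeholder for the monitored gateway address
    while sleep 1; do
        if ping -c 1 -w 2 "$GW" > /dev/null 2>&1; then
            echo "$(date '+%b %e %T') reply from $GW"
        else
            echo "$(date '+%b %e %T') NO reply from $GW"
        fi
    done >> /var/tmp/gw-ping.log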
Continuing through that day's log:
Feb 6 11:46:40 local/pri-4600 notice mcpd[3447]: 01070727:5: Pool member bcj.bda.cfd.cab:0 monitor status up.
Feb 6 11:46:40 local/pri-4600 notice mcpd[3447]: 01070727:5: Pool member bcj.bda.cfd.cab:0 monitor status up.
Feb 6 11:46:40 local/pri-4600 notice mcpd[3447]: 01070727:5: Pool member bcj.bda.cfd.cab:0 monitor status up.
Feb 6 11:46:40 local/pri-4600 notice sod[3440]: 01140030:5: HA pool_memb_down FWSM_Wildcard_Virt1 is now responding.
Feb 6 12:26:21 local/pri-4600 notice mcpd[3447]: 01070638:5: Pool member bcj.bda.cfd.cab:0 monitor status down.
Feb 6 12:26:21 local/pri-4600 notice mcpd[3447]: 01070638:5: Pool member bcj.bda.cfd.cab:0 monitor status down.
Feb 6 12:26:21 local/pri-4600 notice mcpd[3447]: 01070638:5: Pool member bcj.bda.cfd.cab:0 monitor status down.
Feb 6 12:26:21 local/pri-4600 notice sod[3440]: 01140029:5: HA pool_memb_down FWSM_Wildcard_Virt1 fails action is failover.
Feb 6 12:26:26 local/pri-4600 notice mcpd[3447]: 01070727:5: Pool member bcj.bda.cfd.cab:0 monitor status up.
Feb 6 12:26:26 local/pri-4600 notice mcpd[3447]: 01070727:5: Pool member bcj.bda.cfd.cab:0 monitor status up.
Feb 6 12:26:26 local/pri-4600 notice mcpd[3447]: 01070727:5: Pool member bcj.bda.cfd.cab:0 monitor status up.
Feb 6 12:26:26 local/pri-4600 notice sod[3440]: 01140030:5: HA pool_memb_down FWSM_Wildcard_Virt1 is now responding.
While on the secondary:
Feb 6 09:37:00 local/sec-4600 notice mcpd[3448]: 01070638:5: Pool member bcj.bda.cfd.cab:0 monitor status down.
Feb 6 09:37:00 local/sec-4600 notice mcpd[3448]: 01070638:5: Pool member bcj.bda.cfd.cab:0 monitor status down.
Feb 6 09:37:00 local/sec-4600 notice mcpd[3448]: 01070638:5: Pool member bcj.bda.cfd.cab:0 monitor status down.
Feb 6 09:37:00 local/sec-4600 notice sod[3453]: 01140029:5: HA pool_memb_down FWSM_Wildcard_Virt2 fails action is failover.
Feb 6 09:37:10 local/sec-4600 notice mcpd[3448]: 01070727:5: Pool member bcj.bda.cfd.cab:0 monitor status up.
Feb 6 09:37:10 local/sec-4600 notice mcpd[3448]: 01070727:5: Pool member bcj.bda.cfd.cab:0 monitor status up.
Feb 6 09:37:10 local/sec-4600 notice mcpd[3448]: 01070727:5: Pool member bcj.bda.cfd.cab:0 monitor status up.
Feb 6 09:37:10 local/sec-4600 notice sod[3453]: 01140030:5: HA pool_memb_down FWSM_Wildcard_Virt2 is now responding.
Feb 6 11:45:56 local/sec-4600 notice sod[3453]: 010c0019:5: Active
And these are actually from the second back-and-forth; the logs for the earlier pri->sec failover have rotated off by now.
I had switched back from sec to pri early on Feb 4th (before I had checked my messages and found that it was a snow day...and later the 5th was also a snow day 😉). The switch back from sec to pri on the 4th took longer than expected, because when I had recreated its gateway failsafe I had inadvertently left the action set to Reboot. That caused it to reboot over and over again until I was able to disable it.

Which was challenging: even though I could ssh in well before the boot had finished, I couldn't reconfigure HA until it knew whether it was licensed for it. I don't recall whether I saw that the last time I got into a reboot loop...though that's probably because I hadn't thought to try ssh'ing in, even though sshd starts fairly early in the boot, and was instead racing the narrow window between the console login prompt and the failsafe reboot. It's times like these that I think our password is overly complex.... 🙂
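For reference, I think the change amounts to something like the following in tmsh terms, using the same pool-level min-up-members knob as in the sketch above; I'm not certain this maps one-for-one to the gateway fail-safe action screen on the version these boxes run, so take it as an assumption (pool name taken from the logs above):

    # Sketch: set the fail-safe action back to failover instead of reboot...
    tmsh modify ltm pool FWSM_Wildcard_Virt1 min-up-members-action failover
    # ...or stop the check from triggering any action at all until things settle:
    tmsh modify ltm pool FWSM_Wildcard_Virt1 min-up-members-checking disabled
    tmsh save sys config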
I switched back again early on the 7th....so far it's stayed.