26-Aug-2018 19:24
I have a forwarding virtual server configured with the recommended settings except for increasing the idle timeout due to NFS mounts. The timeout is set to 3600 seconds.
fastl4 profile:
idle-timeout 3600
loose-close enabled
loose-initialization enabled
reset-on-timeout disabled
The problem we are seeing is due to NFS port reuse below 1024. If there is a network error, or the client servers lose their connection to the NAS filer, the symptom is that the client waits 5 minutes and then sends a SYN. This SYN uses the same source IP and the same source port as the original connection. The F5, seeing a SYN coming from a device with the SAME IP and the SAME source port, will DROP the packet. Until the connection is cleared, the systems can no longer communicate. The client keeps sending the SYN every 10 seconds, which resets the idle timeout, so the connection will never drop. The only solution is to re-initialize the client, which then sends the SYN on a different source port. This does not happen if the client uses a port above 1024, but that is a security risk because it would let a non-root user gain access.
Someone mentioned using TCP keep-alives? We have not tried this, but any thoughts? If I clear the connection from the connection table, the request immediately goes through and a connection is established again.
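For reference, the stuck entry can be inspected from tmsh before clearing it. A minimal sketch, assuming a client at 10.0.0.50 reusing source port 768 (both hypothetical values):
# Show any connection-table entry matching that client IP and source port
tmsh show sys connection cs-client-addr 10.0.0.50 cs-client-port 768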
27-Aug-2018 00:43
Based on this article, in older versions you could use TCP keepalives to detect a dead connection. It sounds like after three keepalive failures, the BIG-IP will a) reset the connection and b) clear the connection table entry. I'm not sure if this still holds; you would probably want to do some testing.
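If you do test it, the fastL4 profile exposes a keep-alive interval you can set from tmsh. A minimal sketch, assuming a custom profile named fastl4_nfs (hypothetical) that keeps the 3600-second idle timeout from the original post; 75 seconds is an arbitrary probe interval chosen for illustration:
# Custom fastL4 profile with keepalive probes enabled (default is 0/disabled)
tmsh create ltm profile fastl4 fastl4_nfs defaults-from fastL4 idle-timeout 3600 keep-alive-interval 75
# Then attach fastl4_nfs to the forwarding virtual server in place of fastL4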
01-Jul-2023 05:47
Did you do what was suggested before?
Can you post the virtual server and TCP profile settings?
03-Jul-2023 00:27
Hello,
I did not apply keepalives to this profile because, reading about how they work, it seems the BIG-IP sends TCP keepalive probes to both the client and the server.
Both are alive, so they will answer correctly; but if you think this could work, I have no problem applying it.
The problem seems to trigger when we have a routing issue. The client loses its connection to the VIP, and when it recovers and tries to reuse that connection, the F5 does not forward packets to the servers.
The client keeps sending SYN packets, which renews the idle timeout, so the connection table entry never clears.
Deleting the connection solves the issue.
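For reference, a sketch of how the stuck entry can be cleared from tmsh; 10.158.19.36 is the VIP from the configuration below, and the client filters are placeholders:
# Clear every connection-table entry toward the VIP
tmsh delete sys connection cs-server-addr 10.158.19.36
# Or clear only the one stuck client flow (placeholders)
tmsh delete sys connection cs-client-addr <client-ip> cs-client-port <client-port>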
This is the VS and L4 profile configuration:
show running-config ltm virtual SYNC_NFS_PRE_GCP_JC
ltm virtual SYNC_NFS_PRE_GCP_JC {
    description "VIP_ SYNC_NFS_PRE_GCP_JC"
    destination 10.158.19.36:any
    mask 255.255.255.255
    persist {
        source_addr {
            default yes
        }
    }
    pool POOL_SYNC_NFS_PRE_GCP_JC
    profiles {
        fastL4 { }
    }
    source 0.0.0.0/0
    source-address-translation {
        pool POOL_SNAT_NFS_JC_PRE
        type snat
    }
    translate-address enabled
    translate-port enabled
    vs-index 225
}
show running-config ltm pool POOL_SYNC_NFS_PRE_GCP_JC
ltm pool POOL_SYNC_NFS_PRE_GCP_JC {
    members {
        NFS_NAS_PRE_Minsait_GCP:any {
            address 10.64.201.145
            session monitor-enabled
            state up
        }
    }
    monitor gateway_icmp
}
show running-config ltm profile fastl4 fastL4
ltm profile fastl4 fastL4 {
    app-service none
    idle-timeout 300
    mss-override 0
    pva-acceleration full
    reassemble-fragments disabled
    reset-on-timeout enabled
}
04-Jul-2023 10:58
I haven't encountered this exact issue, but perhaps action on service down can help here; give reject or drop a go. I assume the BIG-IP also sees the pool member as down for a while?
https://my.f5.com/manage/s/article/K15095
Otherwise, perhaps try a lower idle-timeout on the L4 profile: create a different profile and see how that goes.
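A minimal sketch of both suggestions from tmsh, using the pool name from the post above (fastl4_nfs_short is a hypothetical profile name, and 60 seconds is just an example value):
# Reject (RST) existing flows when the pool member goes down
# (action-on-service-down values: none, reject, drop, reselect; see K15095)
tmsh modify ltm pool POOL_SYNC_NFS_PRE_GCP_JC action-on-service-down reject
# Separate L4 profile with a shorter idle timeout, for testing
tmsh create ltm profile fastl4 fastl4_nfs_short defaults-from fastL4 idle-timeout 60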
07-Jul-2023 04:08
Just for your information, in case someone else runs into this issue: I was able to run some tests in a lab environment and fix this.
This issue is related to:
https://my.f5.com/manage/s/article/K24375819
https://cdn.f5.com/product/bugtracker/ID742078.html
In older versions you can fix this by applying "modify sys db tm.dupsynenforce value disable".
In newer versions (tested on 15.1.5 and 15.1.8), it works without needing to apply the suggested workaround.
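For completeness, on affected older versions the workaround can be applied and verified from tmsh (the variable name is taken from the articles above; a sketch, not re-tested here):
# Apply the workaround from K24375819
tmsh modify sys db tm.dupsynenforce value disable
# Confirm the new value
tmsh list sys db tm.dupsynenforce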
09-Jul-2023 03:00
Thank you for sharing the solution.