Technical Forum
Ask questions. Discover Answers.

NFS requests from privileged port (< 1024) issues with port-reuse

I have a forwarding virtual server configured with the recommended settings except for increasing the idle timeout due to NFS mounts. The timeout is set to 3600 seconds.


fastl4 profile:

idle-timeout 3600
loose-close enabled
loose-initialization enabled
reset-on-timeout disabled


The problem we are seeing is due to NFS port reuse below 1024. If, for any reason, there is a network error or the client servers lose their connection to the NAS filer, the symptom is that the client waits 5 minutes and then sends a SYN. This SYN uses the same source IP and the same source port as the original connection. The F5, seeing a SYN from a device with the SAME IP and the SAME source port, will DROP the packet. Until the connection is cleared, the systems can no longer communicate. The client keeps sending the SYN every 10 seconds, which resets the idle timeout, so the connection will never drop. The only solution is to re-initialize the client, which then sends its SYN from a different source port. This does not happen if the client uses a port above 1024, but that is a security risk because a non-root user could gain access.


Someone mentioned using TCP keep-alives? We have not tried this, but any thoughts? If I clear the connection from the connection table, the request immediately goes through and a connection is established once again.
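For anyone who finds this later, the stuck entry can be cleared by hand from tmsh. A minimal sketch, assuming a client at 10.1.1.10 using source port 1010 (both values are placeholders, substitute your own):

```shell
# Show the stuck entry first; the cs-client-* filters match the client side of the flow
tmsh show sys connection cs-client-addr 10.1.1.10 cs-client-port 1010

# Delete it; the next SYN from the client then builds a fresh flow
tmsh delete sys connection cs-client-addr 10.1.1.10 cs-client-port 1010
```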




Based on this article, in older versions you could use TCP keepalives to detect a dead connection. It sounds like after three keepalive failures the BIG-IP will a) reset the connection and b) clear the connection-table entry. I'm not sure if this still holds; you would probably want to do some testing.
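If you want to test that, the FastL4 profile exposes a keep-alive-interval setting (in seconds, 0 = disabled). A sketch of a custom profile, where the profile name fastl4_nfs, the 75-second interval, and the virtual server name my_nfs_vs are all placeholders for your own values:

```shell
# Hypothetical custom FastL4 profile inheriting from the stock fastL4 profile
tmsh create ltm profile fastl4 fastl4_nfs \
    defaults-from fastL4 \
    idle-timeout 3600 \
    keep-alive-interval 75

# Attach it to the forwarding virtual server in place of the stock profile
tmsh modify ltm virtual my_nfs_vs profiles replace-all-with { fastl4_nfs }
```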




I know this is an old post but I am facing this issue now.

Were you able to fix this?

Did you do what was suggested before?

Can you post the virtual server and TCP profile settings?


I did not apply keepalive to this profile because, from what I read about how it works, it sends TCP keepalive probes to both the client and the server.

Both are alive, so they will answer correctly; but if you think this could work, I have no problem applying it.

The problem seems to trigger when we have a routing issue. The client loses connectivity to the VIP, and when it recovers and tries to reuse that connection, the F5 does not send packets to the servers.

The client keeps sending SYN packets, renewing the idle timeout, so the connection-table entry never clears.

Deleting connection solves the issue.

This is the VS and L4 profile configuration:

show running-config ltm virtual SYNC_NFS_PRE_GCP_JC
ltm virtual SYNC_NFS_PRE_GCP_JC {
    description "VIP_ SYNC_NFS_PRE_GCP_JC"
    persist {
        source_addr {
            default yes
        }
    }
    profiles {
        fastL4 { }
    }
    source-address-translation {
        type snat
    }
    translate-address enabled
    translate-port enabled
    vs-index 225
}

show running-config ltm pool POOL_SYNC_NFS_PRE_GCP_JC
ltm pool POOL_SYNC_NFS_PRE_GCP_JC {
    members {
        NFS_NAS_PRE_Minsait_GCP:any {
            session monitor-enabled
            state up
        }
    }
    monitor gateway_icmp
}

show running-config ltm profile fastl4 fastL4
ltm profile fastl4 fastL4 {
    app-service none
    idle-timeout 300
    mss-override 0
    pva-acceleration full
    reassemble-fragments disabled
    reset-on-timeout enabled
}



Hello @boneyard, any suggestions?

I haven't encountered this exactly, but perhaps "action on service down" can help here; give reject or drop a go. I assume the BIG-IP also sees the pool member as down for a while?

Otherwise, perhaps try a lower idle-timeout on the L4 profile: create a different profile and see how that goes.
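A sketch of both suggestions in tmsh, using the pool and virtual server names from the configuration posted above (the fastl4_nfs_short profile name and the 60-second timeout are placeholders to adjust for your environment):

```shell
# Suggestion 1: tear down existing flows when the pool member goes down
tmsh modify ltm pool POOL_SYNC_NFS_PRE_GCP_JC service-down-action reset
# (use "drop" instead of "reset" to discard the flows silently)

# Suggestion 2: hypothetical custom FastL4 profile with a shorter idle timeout
tmsh create ltm profile fastl4 fastl4_nfs_short \
    defaults-from fastL4 \
    idle-timeout 60
tmsh modify ltm virtual SYNC_NFS_PRE_GCP_JC profiles replace-all-with { fastl4_nfs_short }
```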


Just for your information, in case someone else runs into this issue: I was able to run some tests in a lab environment and fix it.

This issue is related to:

In old versions you can fix this by applying "modify sys db tm.dupsynenforce value disable".

In newer versions (tested on 15.1.5 and 15.1.8), it works without needing to apply the suggested workaround.
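For the older versions, a sketch of checking and applying the db variable on the BIG-IP (whether it is needed at all depends on your version, as noted above):

```shell
# Check the current value of the duplicate-SYN enforcement variable
tmsh list sys db tm.dupsynenforce

# Disable it so a SYN that re-uses an existing flow's 4-tuple can
# re-establish the connection instead of being dropped
tmsh modify sys db tm.dupsynenforce value disable
```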

Thank you for sharing the solution.