Forum Discussion

Ryan_Bachman_78
Jan 30, 2012

TCP profile and Idle Timeout

Hello -

Hoping someone can help me identify why the expected RST is not being sent. I have an issue with long-running AJAX sessions on my web servers. We recently switched from LVS to BIG-IP LTMs, and rather than figure out a solution in our application code, I wanted to see if the idle timeout on the F5 would help us. In a test environment, I have a TCP profile with reset on timeout enabled and an idle timeout of 120 seconds. I start a session for a client, then browse away and watch the idle time climb. It eventually hits 360 seconds (even though a bigpipe connection listing for client x.x.x.x shows an idle timeout of 120), at which point my PHP code closes the session with a FIN, ACK. I don't know where else to check on my system to find out why it won't send an RST when the idle time on the connection hits 120.
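
For reference, a TCP profile with those settings would look roughly like this from the command line (the profile name below is made up; on 10.x the same values can also be set and checked with bigpipe):

    tmsh create ltm profile tcp tcp_120s defaults-from tcp idle-timeout 120 reset-on-timeout enabled
    tmsh list ltm profile tcp tcp_120s idle-timeout reset-on-timeout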

Thanks for all the help.

  • Hi Ryan,

    I think this might be down to the fact that there are several different idle timeouts in play here. Do you have SNAT automap enabled on the virtual server? Can you list the virtual server config and reply with an anonymized copy using 'tmsh list ltm virtual VS_NAME'?

    Aaron

  • Just in case you have not yet seen this sol:

    sol7606: Overview of BIG-IP idle session timeouts

    http://support.f5.com/kb/en-us/solutions/public/7000/600/sol7606.html
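
    That article covers the different idle timeouts that can apply to the same connection. Roughly, the ones relevant here can be inspected along these lines (a sketch; the profile name is a placeholder, and exact property names may differ by version, with bigpipe equivalents on 10.x):

    # idle-timeout and reset-on-timeout on the protocol profile
    tmsh list ltm profile tcp MY_TCP_PROFILE
    # per-address timeouts (tcp-idle-timeout / ip-idle-timeout) on SNAT translations
    tmsh list ltm snat-translation
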
  • Thanks to both of you for the quick reply. So am I right in understanding that I can't enforce timeouts when using SNAT automap? Is there really no way to configure this setting? The LVS I replaced was configured for direct return, so in order to install the F5 in a transparent fashion we deployed it 'router on a stick' style. If I switch the VS to the fasthttp profile I have a solution to my problem, but the business wants to keep the original client IPs, so I need to inject XFF headers into the requests (see the note after the config below). All my traffic is HTTPS, so fasthttp is not an option at this time. I am including the config of the VS, but it sounds like I already know my answer. I just don't know if there is an easy solution to my problem. Thanks.

    ltm virtual app.perf. {
        destination 172.27.37.80:https
        ip-protocol tcp
        mask 255.255.255.255
        persist {
            cookie {
                default yes
            }
        }
        pool app_perf_https
        profiles {
            http_xff { }
            perf_ssl {
                context clientside
            }
            serverssl {
                context serverside
            }
            stats { }
            tcp_lowtimeout {
                context clientside
            }
        }
        rules {
            Billing_Reditrect
        }
        snat automap
    }
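
    On the XFF point above: X-Forwarded-For insertion is normally enabled on the HTTP profile, so assuming http_xff exists for exactly that purpose, it would have been created with something along these lines (a sketch):

    tmsh create ltm profile http http_xff defaults-from http insert-xforwarded-for enabled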

  • Do you need snat automap then?

    I mean, doesn't that setting make the servers see only a range of IPs handled by the F5 instead of the real client IP?

    Sure, automap is needed if you have the F5 on a stick without further changes to your routing, but you can rearrange this so the F5 sits in the traffic flow instead: set up two network connections from your router to the F5 (call them inside/outside or whatever) and then point your servers at the inside IP of your F5 as their default gateway (see the sketch below), or in the worst case use VRF on your router :P.

    Another method is to physically place the F5 inline between the router and the servers, or for that matter replace your router with the F5 altogether (the F5 supports static routing as well as dynamic routing with BGP, OSPF, IS-IS, and RIP for both IPv4 and IPv6).
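
    As a rough sketch of that inside/outside layout (VLAN names, interfaces and addresses below are made up; the servers would then use the inside self IP, 10.0.1.1 here, as their default gateway):

    tmsh create net vlan outside interfaces add { 1.1 }
    tmsh create net vlan inside interfaces add { 1.2 }
    tmsh create net self outside_self address 192.0.2.10/24 vlan outside
    tmsh create net self inside_self address 10.0.1.1/24 vlan inside allow-service default
    tmsh create net route default_route network default gw 192.0.2.1
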
  • I agree with your assessment that the one-arm configuration might not be the best solution, but at this time it is what I have to work with. I needed something that provided a zero-downtime implementation, and I have multiple services on what you would consider the internal subnet that need to call into the virtual servers. How would the F5 handle traffic for requests to a VIP on the external interface, just to turn it around and load balance it back in? There are other factors that led me to decide on a one-arm deployment as well. I might revisit the architecture at a later date; right now I am just frustrated with getting this LTM to send out RSTs when my connections exceed the idle timeout settings. I like your suggestion and will start exploring what it is going to take to get that done.

    Reading through the docs, I understand that the 10.2.1 version I am running has an indefinite timeout for automap SNAT connections. I tried to work around that by setting up a custom SNAT pool and manually setting all timeout values (TCP & IP) to 60 seconds. I have the same setting in my TCP profile. My connection timers are still climbing past 60 and reaching the 360 mark, where I force the connection closed on the server side. I also tried an iRule to set the timeout value (along the lines of the sketch below), and the results were no different. I guess my question would be: is the F5 supposed to be sending RSTs in this configuration? I have followed the documentation, and it reads like it should be, but I haven't seen the expected results.

    Thanks.
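
    For reference, an idle-timeout iRule along the lines described above would look something like this (a sketch, using the same 60-second value):

    when CLIENT_ACCEPTED {
        # attempt to lower the idle timeout for this connection to 60 seconds
        IP::idle_timeout 60
    }
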
  • My BIG-IP seems to send the reset correctly after the timeout (10 seconds). Do you have a packet trace file? Is there anything suspicious in it?

    [root@ve1023:Active] config  b virtual bar list
    virtual bar {
       snat automap
       pool foo
       destination 172.28.19.79:23
       ip protocol 6
       profiles mytcp {}
    }
    [root@ve1023:Active] config  b pool foo list
    pool foo {
       members 200.200.200.101:23 {}
    }
    [root@ve1023:Active] config  b profile mytcp list all|grep -i 'reset\|idle\ timeout'
       reset on timeout enable
       idle timeout 10
    [root@ve1023:Active] config  b self 200.200.200.10 list
    self 200.200.200.10 {
       netmask 255.255.255.0
       vlan internal
       allow default
    }
    
    [root@ve1023:Active] config  tcpdump -nni 0.0 port 23
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on 0.0, link-type EN10MB (Ethernet), capture size 108 bytes
    18:41:28.917155 IP 192.168.206.154.59953 > 172.28.19.79.23: S 1839157534:1839157534(0) win 8192 
    18:41:28.917199 IP 172.28.19.79.23 > 192.168.206.154.59953: S 3471835166:3471835166(0) ack 1839157535 win 3780 
    18:41:28.919066 IP 192.168.206.154.59953 > 172.28.19.79.23: . ack 1 win 260
    18:41:28.919113 IP 200.200.200.10.59953 > 200.200.200.101.23: S 3121687173:3121687173(0) win 4380 
    18:41:28.920109 IP 200.200.200.101.23 > 200.200.200.10.59953: S 1369647150:1369647150(0) ack 3121687174 win 5840 
    18:41:28.920119 IP 200.200.200.10.59953 > 200.200.200.101.23: . ack 1 win 4380
    
    18:41:31.803183 IP 192.168.206.154.59953 > 172.28.19.79.23: P 91:93(2) ack 141 win 260
    18:41:31.803204 IP 200.200.200.10.59953 > 200.200.200.101.23: P 91:93(2) ack 141 win 4520
    18:41:31.803209 IP 172.28.19.79.23 > 192.168.206.154.59953: . ack 93 win 3872
    18:41:31.804131 IP 200.200.200.101.23 > 200.200.200.10.59953: . ack 93 win 46
    18:41:31.804143 IP 200.200.200.101.23 > 200.200.200.10.59953: P 141:143(2) ack 93 win 46
    18:41:31.804149 IP 172.28.19.79.23 > 192.168.206.154.59953: P 141:143(2) ack 93 win 3872
    18:41:31.904491 IP 200.200.200.10.59953 > 200.200.200.101.23: . ack 143 win 4522
    18:41:31.905163 IP 200.200.200.101.23 > 200.200.200.10.59953: P 143:237(94) ack 93 win 46
    18:41:32.005274 IP 200.200.200.10.59953 > 200.200.200.101.23: . ack 237 win 4616
    18:41:32.010343 IP 192.168.206.154.59953 > 172.28.19.79.23: . ack 143 win 260
    18:41:32.010358 IP 172.28.19.79.23 > 192.168.206.154.59953: P 143:237(94) ack 93 win 3872
    (1) 18:41:32.213283 IP 192.168.206.154.59953 > 172.28.19.79.23: . ack 237 win 259
    (2) 18:41:43.870221 IP 200.200.200.10.59953 > 200.200.200.101.23: R 93:93(0) ack 237 win 4616
    (3) 18:41:43.870241 IP 172.28.19.79.23 > 192.168.206.154.59953: R 237:237(0) ack 93 win 3872