Forum Discussion

newf5learner_13
Nov 20, 2015

Connection resets happening to web services servers when clients use the load-balanced URL. Need help capturing the traffic.

Hi,

I'm getting complaints about intermittent resets when clients use the load-balanced URL. Out of 60 web requests sent from clients over 30 minutes, 8 to 9 requests were dropped intermittently. I've been asked to identify the issue with the resets and the root cause.

Here is the configuration of the VIP. Can someone suggest the best way to capture the traffic at the moment the resets happen? Bear in mind that I cannot run tcpdump for the full 30 minutes, as it could generate a huge log or pcap file. Please suggest an appropriate procedure or tcpdump commands to capture the traffic when resets happen, so I can report the reason behind them. I understand a ringdump (rotating capture) can be used for this, but I don't know how to set one up; see the sketch after the configuration below. Thanks.

ltm virtual vs_gpdef_app_amvescap_ha {
    destination 10.196.1.15:http
    ip-protocol tcp
    mask 255.255.255.255
    persist {
        simple-18000 {
            default yes
        }
    }
    pool pool_GPDEF_app_amvescap_ha
    profiles {
        tcp-gccp { }
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    vs-index 66
}


ltm pool pool_GPDEF_app_amvescap_ha {
    members {
        10.194.232.127:http {
            address 10.194.232.127
            session monitor-enabled
            state up
        }
        10.194.232.134:http {
            address 10.194.232.134
            session monitor-enabled
            state up
        }
    }
    monitor http
}

ltm profile tcp tcp-gccp {
    abc enabled
    ack-on-push disabled
    app-service none
    close-wait-timeout 5
    cmetrics-cache enabled
    congestion-control high-speed
    defaults-from tcp
    deferred-accept disabled
    delayed-acks enabled
    dsack disabled
    ecn disabled
    fin-wait-timeout 5
    idle-timeout 2000
    ip-tos-to-client 0
    keep-alive-interval 1800
    limited-transmit enabled
    link-qos-to-client 0
    max-retrans 8
    md5-signature disabled
    md5-signature-passphrase none
    nagle enabled
    pkt-loss-ignore-burst 0
    pkt-loss-ignore-rate 0
    proxy-buffer-high 16384
    proxy-buffer-low 4096
    proxy-mss disabled
    proxy-options disabled
    receive-window-size 32768
    reset-on-timeout enabled
    selective-acks enabled
    send-buffer-size 32768
    slow-start enabled
    syn-max-retrans 3
    time-wait-recycle enabled
    time-wait-timeout 2000
    timestamps enabled
    verified-accept disabled
    zero-window-timeout 20000
}
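
A minimal ring-buffer ("ringdump") sketch, assuming tcpdump on this TMOS version supports the standard -C/-W rotation options and that /var/tmp has space; the VIP address and port are taken from the configuration above:

# capture on all VLANs (0.0), full packets (-s0), rotating through ten 50 MB files,
# so the capture never grows beyond roughly 500 MB no matter how long it runs
tcpdump -ni 0.0 -s0 -C 50 -W 10 -w /var/tmp/vip_ring.pcap host 10.196.1.15 and port 80

Stop the capture once a client reports a reset; tcpdump overwrites the oldest file in the ring rather than filling the disk, so the most recent traffic is always retained.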
  • nathe

    You can get the BIG-IP to log why it sends RSTs, which might help. See SOL13223: Configuring the BIG-IP system to log TCP RST packets (a tmsh sketch follows this reply).

    As another option, this tcpdump command will capture only RST packets, which should show where the RST is coming from.

    To view all packets that are traveling through the BIG-IP system that contain the RST flag, type the following command:
    tcpdump 'tcp[tcpflags] & (tcp-rst) != 0'
    

    Hope this helps,

    N
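
    A rough sketch of what SOL13223 describes, assuming the db variable names are tm.rstcause.log and tm.rstcause.pkt (verify them against the article for your TMOS version):

    # log the reason for every RST the BIG-IP generates to /var/log/ltm
    tmsh modify /sys db tm.rstcause.log value enable
    # optionally embed the cause text in the RST packet itself, so it shows up in a pcap
    tmsh modify /sys db tm.rstcause.pkt value enable
    # watch the reasons as clients report failures
    tail -f /var/log/ltm | grep -i rst

    Correlating these log lines with the times the clients report failures should tell you whether the BIG-IP itself is generating the resets, and if so, why.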

  • Thanks. Other than the command to capture the resets (the above command is not working; I tried it multiple times on a busy LTM), can you please suggest how to capture traffic using a compound filter on the source and destination servers and port 80?

    I have even tried this

    tcpdump '( tcp[tcpflags] & (tcp-rst) != 0) and host 10.196.1.119'

    but I'm seeing no traffic, even though I reset the connection from the browser.
    
    tcpdump -ni 0.0 src host 10.194.232.2 and '(dst host 10.194.232.127 or dst host 10.194.232.134)' 
    

    10.194.232.2 is the internal self IP address that interacts with the servers; however, this capture also picks up the health monitor traffic. Please suggest how to filter for only my active client connection traffic and exclude the health monitor probes.

    Thanks.
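
    A hedged sketch for separating client traffic from monitor probes. On the client side you can filter on the virtual server address, because monitor probes go straight to the pool members and never touch the VIP. On the server side this only works if a floating self IP is configured: SNAT automap then sources client traffic from the floating address, while monitors source from the non-floating self IP. The floating address below (10.194.232.3) is purely hypothetical; substitute your own.

    # client side: only traffic to/from the VIP, so monitor probes never match
    tcpdump -ni 0.0 -s0 'host 10.196.1.15 and port 80'

    # server side, only if a floating self IP exists: client traffic is SNATed to/from the
    # floating address, monitor probes use the non-floating 10.194.232.2 and fall outside this filter
    tcpdump -ni 0.0 -s0 'host 10.194.232.3 and (host 10.194.232.127 or host 10.194.232.134) and port 80'

    If there is no floating self IP, both flows share 10.194.232.2, and the client-side filter on the VIP is the practical option. Note the -ni 0.0 in both commands, so the capture listens across all VLANs rather than whatever single interface tcpdump picks by default.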