Forum Discussion
Sly_85819
Nimbostratus
Dec 28, 2009
inet port exhaustion - urgent help needed
We recently had two outages in which a single system sending a large number of DNS queries to the LTM caused it to slow down, ultimately resulting in performance degradation for all the apps configured on the LTM. F5 support suggested that the ephemeral ports were exhausted and that we should configure an additional self IP to mitigate the situation. A single host on the network being able to slow down the LTM is a serious cause for concern, so I would like to know if there are any ways to proactively take care of this situation. We have configured SNMP traps, which helped us get notified and reduced the outage time when it happened the second time.
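(For reference, a rough sketch of the self-IP mitigation F5 support suggested. The address, netmask, and VLAN name below are placeholders, and the syntax is recent tmsh; on a 2009-era release the equivalent change would be made via bigpipe or the GUI.)

    # Add another self IP on the internal VLAN so more source addresses,
    # and therefore more ephemeral ports, are available.
    create net self 10.1.10.62 address 10.1.10.62/24 vlan internal allow-service none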
Here is the message that we received - 01010201:2: Inet port exhaustion on 10.1.10.61 to 172.24.8.103:53 (proto 17)
10.1.10.61 is the host sending the DNS requests. 172.24.8.103 is a pool member of the DNS VS, and the DNS VS itself is 172.24.4.252. The name server VS is a "standard" VS, which I believe I need to reconfigure as "Performance (Layer 4)" so it forwards traffic directly instead of doing a full proxy. The message is confusing, however, because it looks as if the client is hitting the server directly? We also have a Forwarding (IP) VS that allows direct access to the servers behind the LTM; I believe a Forwarding (IP) virtual forwards traffic directly using the routing table. I am wondering how the ephemeral ports get used up. Is the message actually about the VS?
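(For readers following along, a rough tmsh sketch of the three virtual server types discussed above. The names and pool are illustrative guesses, not the poster's actual config, which was not included in the thread.)

    # Standard virtual (full proxy) in front of the DNS pool
    create ltm virtual dns_vs destination 172.24.4.252:53 ip-protocol udp profiles add { udp } pool dns_pool

    # Performance (Layer 4) virtual: FastL4 profile, no full proxy
    create ltm virtual dns_vs_fastl4 destination 172.24.4.252:53 ip-protocol udp profiles add { fastL4 } pool dns_pool

    # Forwarding (IP) virtual: no pool; traffic is forwarded per the routing table
    create ltm virtual inbound_ip_forward destination 0.0.0.0:0 mask any ip-forward profiles add { fastL4 }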
Thanks in advance.
- L4L7_53191
Nimbostratus
It sounds like you're using SNAT automap on this virtual server. If you are, that's almost certainly your problem. I've run into this exact scenario before, with aggressive DNS traffic causing ephemeral port exhaustion. Fortunately, the fix is relatively easy: use a SNAT pool with multiple addresses in it. This will do a few things:
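(A sketch of that SNAT pool suggestion, purely for illustration: the pool name and translation addresses are made up, and the syntax is tmsh rather than the bigpipe commands current on 2009-era releases.)

    # Create a SNAT pool with several translation addresses; ephemeral ports
    # are then drawn from multiple IPs instead of a single automap self IP.
    create ltm snatpool dns_snatpool members add { 10.1.10.240 10.1.10.241 10.1.10.242 }

    # Use the SNAT pool instead of automap on the DNS virtual server.
    modify ltm virtual dns_vs source-address-translation { type snat pool dns_snatpool }

- Sly_85819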
Nimbostratus
We are not using SNAT for the VIPs in question. The logged message shows a connection directly to the pool member. I am still trying to understand whether it was sending traffic to the VIP or to the pool member. Below is the config. The iRule basically allows the servers behind the LTM to talk to other VIPs on the same LTM. The inbound_11_route VS allows connections directly to the pool members. - L4L7_53191
Nimbostratus
It looks like you actually may be using SNAT automap, according to your config. The following virtual server points to the iRule above, which issues a SNAT automap address based on a class match on all_server_nodes:
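(The iRule and virtual server definitions referenced here did not survive in the thread; the following is only a guess at the iRule's general shape, using v9-style matchclass syntax and the all_server_nodes data group name mentioned above.)

    when CLIENT_ACCEPTED {
        # If the client is one of the internal server nodes, source-NAT it with
        # automap so return traffic comes back through the LTM. Each such
        # connection consumes an ephemeral port on a self IP.
        if { [matchclass [IP::client_addr] equals $::all_server_nodes] } {
            snat automap
        }
    }

(On v10 and later the equivalent test would be [class match [IP::client_addr] equals all_server_nodes].)

- Jessed12345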
Employee
You may have already checked your timeouts, but if not you may want to consider the idle timeout in the protocol profile assigned to that virtual. The default timeout for TCP is 300 seconds and for UDP it's 60 seconds, both of which are an eternity for DNS. In the past I've used timeouts of 5-10 seconds for DNS traffic.
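(For illustration, a tmsh-style sketch of that timeout change: the profile and virtual server names are placeholders, and on a 2009-era release the same settings would be made via bigpipe or the GUI.)

    # Custom UDP profile that reaps idle DNS flows after 5 seconds.
    create ltm profile udp udp_dns_short defaults-from udp idle-timeout 5

    # Swap the protocol profile on the DNS virtual for the short-timeout one.
    modify ltm virtual dns_vs profiles replace-all-with { udp_dns_short }

- Sly_85819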
Nimbostratus
Matt, - Jessed12345
Employee
- Sly_85819
Nimbostratus
Got it. Is there an order in which profiles with similar settings get applied? I read something about the timeout on the protocol profile versus source address persistence.