Enforcing a Single Connection Max to Pool Members

I like finding jewels and nuggets of clarity in problems presented to the community at large, whether it’s here on DevCentral or in third party communities like Reddit, where member macallen posed the following problem in r/sysadmin a couple months back (paraphrased here, check the link for full context).

Problem Statement

I have a pool of five servers, and I need a maximum of one connection per server strictly enforced. When I set the connection limit to 1 at the node level, I’m still seeing a second connection offered when the 6th active request comes in. Any ideas on how I can accomplish this?

Diagnosing the Problem

First, I’ll mock this up in my lab, albeit on a smaller scale: two servers rather than five, with the connection limit on each server set to one.

Using curl from two virtual machines, I run repeated requests against the virtual server and notice that a maximum of two connections per server is being enforced, not one as anticipated.
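The test itself was roughly like the following, run from each client VM; the VIP address is a placeholder, not taken from the original article:

```shell
#!/bin/sh
# Fire six requests at the virtual server in parallel so the connection
# limit is actually exercised. 10.0.0.100 is a placeholder VIP address.
for i in 1 2 3 4 5 6; do
  curl -s -o /dev/null -w "%{http_code}\n" http://10.0.0.100/ &
done
wait
```

With a true limit of one per server, requests beyond the pool's capacity should queue or fail rather than land as a second connection on a member.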

The problem here is not that TMM is failing to honor the connection limits. The problem, at least on my test system, is that there are two TMMs present. Each TMM is limiting the servers to a maximum of one connection, so in this case, two connections are allowed instead of the required one. And just as the statistical family of 2.3 kids doesn’t actually contain .3 of a kid, there’s no such thing as .5 of a connection, so setting a fractional limit doesn’t make much sense and isn’t allowed anyway.

The good news is that, for almost all use cases at scale, the BIG-IP does the math for you, taking the maximum configured connections and dividing by the number of TMMs. Note that this can lead to unexpected behavior if for some reason the disaggregator (DAG) distributes connections unevenly, and it is generally recommended NOT to set a connection maximum lower than the number of active TMM instances. See K8457 for additional details.
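The arithmetic can be sketched in a few lines of shell. This is an illustration of the behavior described above (and in K8457), not actual TMM code; the assumption that each TMM enforces a floor of one connection matches the behavior observed in the lab:

```shell
#!/bin/sh
# Hedged sketch of the per-TMM connection-limit arithmetic.
# The numbers are illustrative, matching the lab setup above.
configured_limit=1   # connection limit set at the node level
tmm_count=2          # TMM instances on the test system
per_tmm=$(( configured_limit / tmm_count ))
# A per-TMM limit of zero is meaningless, so assume each TMM enforces at least 1:
if [ "$per_tmm" -lt 1 ]; then per_tmm=1; fi
echo "effective system-wide maximum: $(( per_tmm * tmm_count ))"
```

With a configured limit of 1 and two TMMs, this prints an effective system-wide maximum of 2, which is exactly what the curl test showed.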

But now that the problem is known, what do I do about it?


Option #1 - Duct Tape & Chewing Gum!

In the Reddit thread, the original poster solved his own problem. In his words: "I created a duct tape solution. I wrote a service that opens a port. When the user connects, it closes the port, when they disconnect it opens it back up. Then I created a contract in F5 for that port so it disables the node when the port is down. Cheap and dirty, but works."

Glad to hear that works, but not a process I’d recommend. If someone else takes over ownership of that application and has no idea why that service exists and thus removes it…outage city!

Option #2 - Configure BIG-IP VE for a Single Core

I call this the machete mode, where I just whack some compute cycles away to solve the problem. That’s an easy one! Shut down the image, strip it down to a single core, fire it back up, and presto! If this were the only application in service, that would be fantastic. But that’s not likely, and punishing all the other applications being delivered just to meet this one need is not a great solution.

Option #3 - Pin the Virtual Server to a Single Core with an iRule

This option requires no system changes at all, just a simple iRule using a global variable. Global variables are not CMP compatible and will demote any virtual server to a single TMM, effectively pinning it and solving the problem. The iRule could look something like this:

when RULE_INIT {
    set ::global_pin_tmm 1
}

This iRule is clean and compact, with no impact to traffic since its only engagement is at initialization. It also has a useful name, indicating that it’s a global variable and that its purpose is to pin the virtual server to a single TMM. Effective, but it feels a little icky to use an iRule with global variables in any version after 11.4. One of my biggest messages when I speak at user groups is that “iRules are great! But don’t use them!” I always suggest using a configuration option when available, and turning to iRules only when they are truly necessary.

Option #4 - Pin the Virtual Server to a Single Core with a TMSH Command

That brings me to the final option I’ll explore, and that is to use a TMSH command to pin the virtual server. It’s an option on the virtual server (not available in the GUI) to disable CMP:

tmsh modify ltm virtual <virtual name> cmp-enabled no

Super simple, crystal clear in the configuration, no Tcl machinery necessary. That sounds like a winner to me, and the change is evident in a new screen capture.
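For completeness, the full change and a quick verification might look like this from the BIG-IP command line; the virtual server name vs_single_conn is a placeholder, not from the original article:

```shell
# Disable CMP on the virtual server (placeholder name), then confirm
# the setting persisted by listing just that property.
tmsh modify ltm virtual vs_single_conn cmp-enabled no
tmsh list ltm virtual vs_single_conn cmp-enabled
```

With CMP disabled, a single TMM handles all traffic for that virtual server, so the node-level connection limit of one is enforced exactly as configured.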


With BIG-IP, there are often many ways to approach a problem. Sometimes there are no clear advantages amongst solutions, but this problem has a clear winner and that is the final option presented here: using the tmsh command to disable CMP.

Updated Jun 06, 2023
Version 2.0
