We’re trying to expose the management interfaces of IBM’s API Connect (2018.4.1.5, OVA-based, three vSphere-hosted Ubuntu VMs with two network interfaces each) behind four different virtual servers in BIG-IP LTM (18.104.22.168). The pool members are the same IP addresses across all four virtual servers.
Initially we assigned the same SNAT pool of 10 IP addresses to each of these virtual servers.
But we see a problem that seems related to connections arriving from source addresses in different network segments.
If clients connect from our client network and no other traffic hits the virtual server, everything works. But as soon as traffic from the platform itself hits the same virtual server from a different network, connections start failing after a while: things work briefly, and then all connections are reset by the server.
We’ve tried a SNAT pool with only one address, and that only makes it worse: connections time out and the service becomes unusable almost instantly.
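One thing we considered is ephemeral port exhaustion: each SNAT address can only hold roughly 64k concurrent (or TIME_WAIT) connections toward a given backend ip:port, so fewer SNAT addresses means a lower ceiling. A rough back-of-the-envelope sketch of that ceiling (the numbers below are illustrative assumptions, not measurements from our environment):

```python
# Rough capacity model for SNAT port exhaustion. All constants are
# assumptions for illustration, not measurements from this setup.

EPHEMERAL_PORTS = 64511    # usable source ports per SNAT IP (1025-65535)
TIME_WAIT_SECONDS = 120    # how long a closed connection holds its port

def max_sustained_conn_rate(snat_ips: int) -> float:
    """Connections/sec to ONE backend ip:port before source ports run out,
    assuming short-lived connections that linger in TIME_WAIT afterwards."""
    return snat_ips * EPHEMERAL_PORTS / TIME_WAIT_SECONDS

print(round(max_sustained_conn_rate(1)))   # single SNAT address -> ~538 conn/s
print(round(max_sustained_conn_rate(10)))  # ten SNAT addresses -> ~5376 conn/s
```

This doesn’t fully explain why *separating* the traffic helps, but it does show why a single-address SNAT pool would fail much faster under the same load.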
But when we created a separate SNAT pool with only one address for each of the virtual servers, it works as expected. In other words, separating the traffic by type so that it hits the backend servers from different BIG-IP source addresses: client traffic comes from one SNAT address and platform API traffic from another.
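For reference, the working arrangement looks roughly like this in tmsh (the object names and addresses below are placeholders, not our real configuration):

```shell
# Hypothetical names/addresses -- one dedicated single-address SNAT pool
# per virtual server, so each traffic type gets its own source IP.
tmsh create ltm snatpool snat_apic_client members add { 10.0.1.51 }
tmsh create ltm snatpool snat_apic_platform members add { 10.0.1.52 }

# Attach each SNAT pool to its corresponding virtual server.
tmsh modify ltm virtual vs_apic_mgmt_client source-address-translation \
    { type snat pool snat_apic_client }
tmsh modify ltm virtual vs_apic_mgmt_platform source-address-translation \
    { type snat pool snat_apic_platform }
```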
The two interfaces on the Ubuntu VMs don’t seem to be the cause. We set up another environment with a single node and only one network interface, and we still need to separate the platform traffic from the client traffic.
I know it’s hard to get an overview of our setup from a couple of paragraphs of text, but has anyone had similar problems, or does anyone have tips for troubleshooting the root cause of this?
Or recommendations from anyone who has exposed API Connect behind BIG-IP LTM?