Forum Discussion
Merge two separate TCP connections on the client side into one single-threaded TCP connection on the server side.
I have tried OneConnect, but the idle TCP timeout/reuse behaviour is not really what I am after. We have two servers that need to dual-home their TCP connections, destination port 2066, to a terminal server. The problem is that the terminal server is not multi-threaded on its listening process and cannot be changed. I need LTM to act as a blind proxy for each client.
Example - Server A, source port 40001, to the port 2066 virtual server address. Server A, source port 50002, to port 2066 on the same virtual server address (same as A). LTM combines the conversations and sends from its own self IP and port to destination port 2066 on the terminal server, actively passing PSH packets from Server A and Server B into the single-threaded TCP connection on the terminal server and proxying back the responses.
12 Replies
- Jeff_47438
Altocumulus
Sorry, I noticed a mistake in the example when I re-read it.
Example - Server "A" source port 40001 to 2066 virtual server address. Server "B" source port 50002 to port 2066 virtual server address (same as A). LTM combines conversations to send from it's self IP port to destination port 2066 on terminal server. Actively passing PSH packets from Server A and Server B to the single threaded TCP connection on the terminal server and proxy-ing back the responses.
- IheartF5_45022
Nacreous
Just because it's single-threaded doesn't necessarily mean that it can't handle multiple TCP connections - have you actually checked this?
- Jeff_47438
Altocumulus
Per my original post, this has been checked. There cannot be more than one TCP connection to the terminal server.
- Thomas_Gobet
Nimbostratus
Are your servers on port 2066 webservers (HTTP traffic)?
If they are, you can read this article on the OneConnect profile: http://support.f5.com/kb/en-us/solutions/public/7000/200/sol7208.html
- Ahmed_Eissa_206
Nimbostratus
The OneConnect profile keeps a persistent serverside connection per client... if another client tries to connect, it will start a new session for it.
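For anyone following along, a OneConnect profile in tmsh looks roughly like the sketch below (both profile names are made up). The source mask controls how widely an idle serverside connection can be reused across clients, but either way this is reuse of idle connections, not the merging of two live client connections that this thread is asking for.

    # Hypothetical profile names. With a host source mask, idle serverside
    # connections are only reused by the same client address; with 0.0.0.0,
    # any client can reuse them from the reuse pool.
    create ltm profile one-connect oneconnect_per_client source-mask 255.255.255.255
    create ltm profile one-connect oneconnect_any_client source-mask 0.0.0.0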
- Jeff_47438
Altocumulus
These are terminal servers for simple message desk notifications. On the other side of the terminal server is a 9600 baud modem circuit. This is the reason for the 1 to 1 relationship on port 2066.
- IheartF5_45022
Nacreous
Try something like this:
- Enable OneConnect (it's the only way you can prevent the serverside connection from closing when the client connection closes)
- Set your pool member connection limit to 1 (see the tmsh sketch after the notes below)
- Add this iRule (the forum formatting mangled it):
when CLIENT_ACCEPTED {
    # set to 1 to enable debug logging
    set debug 0
    if {$debug} {log local0. "[IP::remote_addr]:[TCP::remote_port]"}
    # crude lock - wait here until no other connection holds the flag
    while {[table lookup "fProxyinProgress"] != ""} {
        if {$debug} {log local0. "Request in progress - wait 50ms"}
        after 50
    }
    if {$debug} {log local0. "Setting flag for [IP::remote_addr]:[TCP::remote_port]"}
    table set "fProxyinProgress" "1" 1
}

when CLIENT_CLOSED {
    if {$debug} {log local0. "Release flag [IP::remote_addr]:[TCP::remote_port]"}
    table delete "fProxyinProgress"
}
It's using the session table as a crude semaphore - crude because the waiting/setting operation (in CLIENT_ACCEPTED) really needs to be atomic to be foolproof.
Your clients would need to close each TCP connection each time they received a response for this to work (I don't know how that's going to sit with you), and you'd need to do some fairly serious tinkering with both the clientside and serverside TCP profiles.
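To make the non-iRule pieces above concrete, here is a rough tmsh sketch rather than a tested recipe; every object name and address is invented, and the timeout is only a placeholder:

    # Hypothetical objects: one pool member capped at a single serverside
    # connection, a OneConnect profile so the serverside connection outlives
    # any one client connection, and a TCP profile with a long idle timeout
    # so the connection isn't reaped between messages. merge_2066_rule is
    # the iRule above, created beforehand.
    create ltm pool terminal_pool_2066 members add { 10.1.1.50:2066 { connection-limit 1 } }
    create ltm profile one-connect oneconnect_2066 source-mask 0.0.0.0
    create ltm profile tcp tcp_2066_long defaults-from tcp idle-timeout 3600
    create ltm virtual vs_terminal_2066 destination 10.1.1.100:2066 ip-protocol tcp profiles add { tcp_2066_long oneconnect_2066 } pool terminal_pool_2066 rules { merge_2066_rule }

The connection limit of 1 is what forces every client conversation onto the same serverside connection; OneConnect plus the long idle timeout are what keep that one connection alive between messages.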
Any suggestions from other DevCentralers welcome!!
- Jeff_47438
Altocumulus
So I tried this, but the clients do not close. They are designed to be always-on connections, and there does not seem to be any way to change the client settings in this regard.
- IheartF5_45022
Nacreous
One last crack at this. I haven't tested it, sorry, but the logic makes sense to me - basically each TCP connection, the two clientside ones and the serverside one, will spend nearly all its time in blocking mode 'collecting' data from its respective endpoint. When a clientside connection has data to send, it has to grab a flag indicating that it owns the serverside connection.
If this doesn't work I can raise another forum question pointing to this one to see if anyone else has any ideas.
when CLIENT_ACCEPTED {
    # set to 1 to enable debug logging
    set debug 0
    # Collect the whole request (this will trigger CLIENT_DATA)
    if {$debug} {log local0. "[IP::remote_addr]:[TCP::remote_port]"}
    TCP::collect
}

when CLIENT_DATA {
    # We have client data - wait for the serverside connection to be free,
    # set the session table lock, then release the request to the server
    # and start collecting again
    while {[table lookup "fProxyinProgress"] != ""} {
        if {$debug} {log local0. "Request in progress - wait 50ms"}
        after 50
    }
    if {$debug} {log local0. "Setting flag for [IP::remote_addr]:[TCP::remote_port]"}
    # Set session table lock
    table set "fProxyinProgress" "1" 1
    TCP::release
    TCP::collect
}

when SERVER_CONNECTED {
    TCP::collect
}

when SERVER_DATA {
    # We have the whole response - release it, and release the session table lock
    table delete "fProxyinProgress"
    TCP::release
    # Start collecting the next server response
    TCP::collect
}
- F5_Jeff
Cirrus
Have you tried using multipath TCP (MPTCP) in the TCP profile?
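For reference, on TMOS versions where the TCP profile exposes an MPTCP option, it is enabled on the profile itself, roughly as sketched below; the profile name is invented, and the attribute and its values may vary by version. Note that MPTCP spreads one logical connection over multiple paths, so it may not help with combining two separate client connections into one serverside connection.

    # Hypothetical profile name; the mptcp attribute and its values depend
    # on the TMOS version in use.
    create ltm profile tcp tcp_mptcp_2066 defaults-from tcp mptcp enabled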