Forum Discussion
Creating a dynamic one-to-one SNAT pool
We have a problem with an SMB VIP that appears to be NAT related. What we are looking to do is create a SNAT pool in which each new client IP address is assigned its own SNAT address.
Is there any simple way to do this, or do I need to create an iRule that will create SNAT entries on the fly and keep track of which ones are already set up?
25 Replies
- Brad_Parker
Cirrus
You will most likely need to use an iRule. Are you looking to map a /24 to a /24 or something similar? If so, you could write an iRule that can SNAT 10.0.0.x to 192.168.0.x, for example, by just translating the last octet of the address.
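For example, a rough, untested sketch of that last-octet translation (assuming a 192.168.0.0/24 SNAT range that routes back to the BIG-IP) could be:

when CLIENT_ACCEPTED {
    # Untested sketch: carry the client's last octet into the 192.168.0.x range,
    # e.g. 10.0.0.57 -> 192.168.0.57
    scan [IP::client_addr] {%*d.%*d.%*d.%d} o4
    snat "192.168.0.$o4"
}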
- Brian_Gibson_30
Nimbostratus
No. Basically we are having a problem in which NAT is confusing our server and dropping connections. It is having trouble with multiple users connecting from the same source IP, so we want to create a pool in which each user is assigned a unique source IP address so the NAT isn't overloaded.
- Brad_Parker
Cirrus
Well, that seems like a bad problem for the server to have. You will run out of IPs quickly if the application requires a unique IP per session. What kind of application/web server are you running? Is it having NAT problems or port exhaustion problems?
- Brian_Gibson_30
Nimbostratus
It isn't that bad a problem, but it is a problem. This is an internal service, so the number of users is limited to a few hundred. We will just grab a block out of 10.0.0.0/8 and use that. What is being alleged is that the LB's NAT of the connections is making the server drop connections. If you want a more detailed explanation, it is based on this writeup... http://www.nynaeve.net/?p=93 We aren't 100% certain that this is the problem, but we did see several writeups similar to this one and they all describe the problem we see.
- Brian_Gibson_30
Nimbostratus
Brad,
I am going to try this, but if that doesn't work I'm thinking it might be easier for me to create a replacement mask. So if I have an incoming request from 10.99.99.23, a replacement mask of 172.23.x.x would replace the source with 172.23.99.23.
While this won't guarantee unique source IP addresses, it should greatly reduce the problem.
How do I apply the pool to the SNAT? I'm a little confused about that. Wouldn't snat [IP::server_addr] assign the SNAT to be the destination server address?
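Roughly something like this is what I have in mind (untested sketch; it assumes the 172.23.0.0/16 range is routed back to the BIG-IP):

when CLIENT_ACCEPTED {
    # Untested sketch: keep the last two octets of the client address
    # and prepend 172.23, e.g. 10.99.99.23 -> 172.23.99.23
    scan [IP::client_addr] {%*d.%*d.%d.%d} o3 o4
    snat "172.23.$o3.$o4"
}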
- Brad_Parker
Cirrus
So what I suggested doesn't quite work the way I had hoped. It can be "made" to work, but then it becomes more of a hacked-together solution than it's worth. You are probably better off doing a replacement mask like I first suggested and you suggest here. That's a far simpler solution if you must SNAT the traffic to the SMB server. Alternatively, if the LTM were the default gateway, you would not need to SNAT that traffic at all.
- nitass
Employee
How do I apply the pool to the SNAT?
You can use a simple iRule, something like the one below. You also need to make sure the server knows where to send return traffic, e.g. via routing or ARP.
configuration

root@(ve11b)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm virtual bar
ltm virtual bar {
    destination 172.28.24.10:80
    ip-protocol tcp
    mask 255.255.255.255
    pool foo
    profiles {
        fastL4 { }
    }
    rules {
        qux
    }
    source 0.0.0.0/0
    vs-index 6
}
root@(ve11b)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm pool foo
ltm pool foo {
    members {
        200.200.200.101:80 {
            address 200.200.200.101
        }
    }
}
root@(ve11b)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm rule qux
ltm rule qux {
    when CLIENT_ACCEPTED {
        scan [IP::client_addr] %*d.%*d.%d.%d c d
        snat 192.168.$c.$d
    }
}

trace

[root@ve11b:Active:In Sync] config tcpdump -nni 0.0 port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on 0.0, link-type EN10MB (Ethernet), capture size 96 bytes
13:38:36.981475 IP 172.28.24.1.50638 > 172.28.24.10.80: S 2519421337:2519421337(0) win 5840
13:38:36.981607 IP 192.168.24.1.50638 > 200.200.200.101.80: S 2519421337:2519421337(0) win 5840
- StephanManthey
Nacreous
Using exactly this approach (with "getfield" instead of "scan"; but I like the scan-based approach better (+1)) in a client's environment. No problems so far, and the client has a one-to-one mapping that simplifies troubleshooting. As nitass already pointed out, make sure to have a route back to the virtual address space used for SNAT. The route (on your peripheral components) will point to the server-side floating self IP of the BIG-IP.
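For reference, the getfield-based variant is roughly the following (sketch only, untested; the 192.168.x.x prefix is just an example and must route back to the BIG-IP):

when CLIENT_ACCEPTED {
    # Untested sketch: getfield splits on "."; fields 3 and 4 are the last two octets
    set o3 [getfield [IP::client_addr] "." 3]
    set o4 [getfield [IP::client_addr] "." 4]
    snat "192.168.$o3.$o4"
}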
- nitass
Employee
Is there any simple way to do this, or do I need to create an iRule that will create SNAT entries on the fly and keep track of which ones are already set up?
The simplest way is to use CGNAT persistence.
sol14823: Overview of CGNAT Persistence
https://support.f5.com/kb/en-us/solutions/public/14000/800/sol14823.html
Anyway, you can use an iRule to assign a SNAT IP on the fly and keep track of it. I think you need at least three tables to track it: one stores the client IP (key) and SNAT IP (value), the next keeps the SNAT IPs that are in use, and the last one records all incoming client ports (the SNAT IP can be released after all connections from the client are closed). Also, you have to refresh (touch) the table entries to make sure they don't time out.
Here is my testing. Please note that it may not be fully correct.
e.g.
configuration

root@(ve11b)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm virtual bar
ltm virtual bar {
    destination 172.28.24.10:23
    ip-protocol tcp
    mask 255.255.255.255
    pool foo
    profiles {
        fastL4 { }
    }
    rules {
        qux
    }
    source 0.0.0.0/0
    vs-index 6
}
root@(ve11b)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm snatpool mysnat_pool
ltm snatpool mysnat_pool {
    members {
        200.200.200.22
        200.200.200.33
        200.200.200.44
    }
}
root@(ve11b)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm rule qux
ltm rule qux {
    when RULE_INIT {
        set static::snatpool "mysnat_pool"
        set static::snatpool_list [eval members -list $static::snatpool]
        set static::snatip_inuse "snatip_inuse"
        set static::client_inuse "client_inuse"
        set static::table_timeout 30
    }
    when CLIENT_ACCEPTED {
        set snatip [table lookup [IP::client_addr]]
        if { $snatip eq "" } {
            foreach snatpool_mbr $static::snatpool_list {
                set snatip [lindex $snatpool_mbr 0]
                if { [table lookup -subtable $static::snatip_inuse $snatip] eq "" } {
                    table set [IP::client_addr] $snatip $static::table_timeout
                    table set -subtable ${static::client_inuse}_[IP::client_addr] [TCP::client_port] 1 $static::table_timeout
                    table set -subtable $static::snatip_inuse $snatip 1 $static::table_timeout
                    set monitor_id [\
                        after [expr {($static::table_timeout * 1000) / 2}] -periodic {
                            log local0. "[IP::client_addr]:[TCP::client_port]: touch"
                            table lookup [IP::client_addr]
                            table lookup -subtable ${static::client_inuse}_[IP::client_addr] [TCP::client_port]
                            table lookup -subtable $static::snatip_inuse $snatip
                        }\
                    ]
                    log local0. "[IP::client_addr]:[TCP::client_port]: new client [IP::client_addr]. snat ip $snatip is used."
                    snat $snatip
                    return
                }
            }
            event CLIENT_CLOSED disable
            log local0. "[IP::client_addr]:[TCP::client_port]: no snat ip is available for [IP::client_addr]. connection is rejected."
            reject
        } else {
            log local0. "[IP::client_addr]:[TCP::client_port]: existing client [IP::client_addr]. snat ip $snatip is used."
            table set -subtable ${static::client_inuse}_[IP::client_addr] [TCP::client_port] 1 $static::table_timeout
            table lookup -subtable $static::snatip_inuse $snatip
            set monitor_id [\
                after [expr {($static::table_timeout * 1000) / 2}] -periodic {
                    log local0. "[IP::client_addr]:[TCP::client_port]: touch"
                    table lookup [IP::client_addr]
                    table lookup -subtable ${static::client_inuse}_[IP::client_addr] [TCP::client_port]
                    table lookup -subtable $static::snatip_inuse $snatip
                }\
            ]
            snat $snatip
        }
    }
    when CLIENT_CLOSED {
        if { [table keys -subtable ${static::client_inuse}_[IP::client_addr] -count] == 1 } {
            log local0. "[IP::client_addr]:[TCP::client_port]: [IP::client_addr] is closed. snat ip [table lookup [IP::client_addr]] is released."
            table delete -subtable ${static::client_inuse}_[IP::client_addr] -all
            table delete -subtable $static::snatip_inuse $snatip
            table delete [IP::client_addr]
        } else {
            log local0. "[IP::client_addr]:[TCP::client_port]: [IP::client_addr]:[TCP::client_port] is closed."
            after cancel $monitor_id
            table delete -subtable ${static::client_inuse}_[IP::client_addr] [TCP::client_port]
        }
    }
}

/var/log/ltm

[root@ve11b:Active:In Sync] config tail -f /var/log/ltm
Feb 20 15:20:31 ve11b info tmm1[14140]: Rule /Common/qux : 172.28.24.1:38318: new client 172.28.24.1. snat ip 200.200.200.44 is used.
Feb 20 15:20:46 ve11b info tmm1[14140]: Rule /Common/qux : 172.28.24.1:38318: touch
Feb 20 15:21:01 ve11b info tmm1[14140]: Rule /Common/qux : 172.28.24.1:38318: touch
Feb 20 15:21:03 ve11b info tmm[14140]: Rule /Common/qux : 172.28.24.1:38319: existing client 172.28.24.1. snat ip 200.200.200.44 is used.
Feb 20 15:21:16 ve11b info tmm1[14140]: Rule /Common/qux : 172.28.24.1:38318: touch
Feb 20 15:21:18 ve11b info tmm[14140]: Rule /Common/qux : 172.28.24.1:38319: touch
Feb 20 15:21:24 ve11b info tmm1[14140]: Rule /Common/qux : 172.28.24.15:39032: new client 172.28.24.15. snat ip 200.200.200.33 is used.
Feb 20 15:21:31 ve11b info tmm1[14140]: Rule /Common/qux : 172.28.24.1:38318: touch
Feb 20 15:21:33 ve11b info tmm[14140]: Rule /Common/qux : 172.28.24.1:38319: touch
Feb 20 15:21:39 ve11b info tmm1[14140]: Rule /Common/qux : 172.28.24.15:39032: touch
Feb 20 15:21:44 ve11b info tmm1[14140]: Rule /Common/qux : 172.28.24.1:38318: 172.28.24.1:38318 is closed.
Feb 20 15:21:48 ve11b info tmm[14140]: Rule /Common/qux : 172.28.24.1:38319: touch
Feb 20 15:21:50 ve11b info tmm1[14140]: Rule /Common/qux : 172.28.24.15:39032: 172.28.24.15 is closed. snat ip 200.200.200.33 is released.
Feb 20 15:21:52 ve11b info tmm[14140]: Rule /Common/qux : 172.28.24.1:38319: 172.28.24.1 is closed. snat ip 200.200.200.44 is released.
- Brian_Gibson_30
Nimbostratus
Wow. Some great information here! Thanks so much Nitass and Brad.
I went with the KISS approach and implemented a simple mask. Given the relatively small number of users (fewer than 200), spread across only a few subnets, this approach gave me a pretty simple SNAT setup without having to track the values. If I were working with a larger user base I likely would have needed to do what nitass proposed, which is considerably more elaborate than...
when CLIENT_ACCEPTED {
    scan [IP::client_addr] %d.%d.%d.%d 0 0 o3 o4
    snat "10.73.$o3.$o4"
}
Thanks for all the helpful ideas though.
- kridsana
Cirrocumulus
Hi nitass
I tried to use this iRule, but it seems like I can only connect with one client.
BIG-IP version 11.5.
Not sure if this iRule supports this BIG-IP version?
Thank you