snat automap
11 Topics

tcp_tw_recycle with SNAT
We have a very high volume of TIME_WAIT connections, and we are planning to enable the tcp_tw_recycle TCP setting on our Apache web server so we can reclaim connections faster. But many folks say it creates issues with NAT, so before enabling it I need expert advice: is it safe to use with an LTM (SNAT)?
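A hedged sidebar on the knob itself (not from the original thread): tcp_tw_recycle is known to break clients behind NAT because it validates per-host TCP timestamps, and it was removed from Linux entirely in kernel 4.12; tcp_tw_reuse is usually the safer lever. A sketch, assuming a sysctl-based Linux tuning workflow (values are illustrative, not a recommendation for every workload):

```shell
# Check TIME_WAIT pressure first
ss -s | grep -i timewait

# Safer alternative to tcp_tw_recycle for reclaiming sockets faster
cat <<'EOF' > /etc/sysctl.d/90-timewait.conf
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
EOF
sysctl --system
```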
X-Forwarded-For header

Hi all, my application team's requirement is to see the actual client IP address of whoever is accessing the application, rather than the BIG-IP's address, since SNAT (Auto Map) is enabled. I have read some solution articles and understand that we can achieve this with an iRule and an HTTP profile. However, my requirement is to use an iRule so we can decide whether to add the X-Forwarded-For header to client requests. Can anyone please share an iRule for this requirement? Thanks in advance, MSK
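Not an answer from the thread itself, but the commonly used pattern looks like the sketch below: an iRule on the virtual server inserting the client address before SNAT rewrites the source. The hostname check is a placeholder for whatever decision logic you need:

```tcl
# iRule sketch: insert the real client IP for pool members to log.
when HTTP_REQUEST {
    # Drop any client-supplied copy so the value cannot be spoofed
    HTTP::header remove X-Forwarded-For
    # Example decision point: only tag requests for a given host
    if { [HTTP::host] equals "app.example.com" } {
        HTTP::header insert X-Forwarded-For [IP::client_addr]
    }
}
```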
clients and servers are on the same network

Hi folks, we have a load-balancing scenario where the client (10.10.10.10) and the nodes (10.10.10.20 and 10.10.10.30) are in the same network, 10.10.10.0/24, and the VIP is 20.20.20.20. I am using SNAT Automap with the self IP 10.10.10.5. When the client initiates a connection to the VIP, I see the traffic hit the F5 in tcpdump; I also see the backend connection from the F5 self IP (10.10.10.5) to one of the nodes (say 10.10.10.20), and then from the node (10.10.10.20) back to the self IP (10.10.10.5), which is perfectly fine. But I also see something strange in the capture: the self IP (10.10.10.5) talking directly to the client (10.10.10.10). This makes the client's connection fail every time. Why is the self IP trying to talk to the client directly? The client should get the response back from the VIP, not the self IP. Any suggestions for this asymmetric packet flow? I have tried a SNAT pool with an IP in the same subnet, and the issue remains the same. Am I missing anything? Thanks
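One way to chase a flow like this (a troubleshooting sketch, not a diagnosis): capture on the BIG-IP with the :p peer-flow modifier so each client-side conversation is shown alongside its matching server-side conversation, which makes any asymmetric leg stand out:

```shell
# On the BIG-IP: interface 0.0 captures all VLANs, and :p adds the peer
# flow for each matched connection; filter on the client address.
tcpdump -nni 0.0:p host 10.10.10.10
```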
Unable to access virtual server over port 53

I currently have a virtual server set up to load balance across three DNS servers. If I issue the command "nslookup www.google.com [IP of VS]" from a client machine, I get a DNS request timed out error. I've verified that the VIP is reachable from the client and that it's operational on the BIG-IP. The DNS servers are reachable from the BIG-IP as well and are passing the monitor associated with the pool.
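A hedged configuration sketch (the names and the 192.0.2.10 address are placeholders, not from the post): nslookup queries over UDP by default, so the virtual server needs to listen on UDP 53; a virtual with only a TCP listener at the same address will be pingable yet time out for queries:

```shell
# tmsh sketch for a UDP DNS virtual with SNAT Auto Map
tmsh create ltm virtual vs_dns_udp \
    destination 192.0.2.10:53 ip-protocol udp \
    profiles add { udp } pool pool_dns \
    source-address-translation { type automap }
```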
Kill SNAT AutoMap!

Problem this snippet solves:

Gets the client IP address down to your servers for logging, etc., with minimal configuration required by the engineer setting it up. No policy routes buried in your firewalls (they live on your nodes instead), no making the BigIP the default gateway for your nodes, etc.

Prerequisites:
- Known to work with RHEL variants at 6.5 (2.6.32 kernel) or better
- Known to work with BigIP LTM 11.5 or better
- Must have SelfIPs/Floater in the same subnet as the nodes using this script; will work on virtual interfaces, e.g. eth0:0

Drawbacks:
- Must have SelfIPs/Floater in each subnet in which your nodes exist
- SelfIPs must be in consecutive order, e.g. 10.10.10.10 and 10.10.10.11 (standalones are exempt, of course)
- Must have layer-2-capable gear sitting in front of the BigIP

The IP addresses and MAC addresses below have been scrambled, as you will notice. Please bear that in mind.

Example /etc/sysconfig/bigip-gw-config:

```shell
$ cat /etc/sysconfig/bigip-gw-config
# Vars for bigip-gw init script
# !! ATTENTION HUMAN : !!
# Config is managed by Chef.
# Local changes are fleeting.
#
IFACE_SELECTION="all"
BIGIP_SELFIPS=(10.6X.2Y.200/2F-10.6X.2Y.201/25 10.64.21F.130/26-10.64.21F.131/26 10.64.196.211/26-10.64.196.212/26)
BIGIP_FLOATERS=(10.6X.2Y.202/25 10.64.21F.132/26 10.64.19Z.210/26)
BIGIP_MAC_ADDRS=(00:14:14:11:63:X2 00:F0:F6:ab:fc:0d 00:A4:14:XY:d2:af 00:14:@@:XY:d2:b9 00:dog:72:XY:d2:c3 00:F0:F6:ab:FX:b9)
RT_TABLES=/etc/iproute2/rt_tables

######
# HA_ENABLED - A bit different here. Bash booleans don't work like
# true|false booleans, thus you need a function returning a value.
#   true  - { return 0; }
#   false - { return 1; }
# *** NOTE ***
# In most cases, if HA_ENABLED is false, the BIGIP_FLOATERS entry
# should be an empty array.
# **END NOTE**
#
# Sample HA_ENABLED entry:
#   HA_ENABLED() { return 0; } # BigIP is an HA pair
#   -or-
#   HA_ENABLED() { return 1; } # BigIP is a standalone
######
HA_ENABLED() { return 0; } # true - BigIP is an HA pair
```

SHORT STORY:

Want a script that uses iptables/iproute2 and layer 2 to leverage a BigIP floater as the conditional gateway for your Linux nodes, thereby bringing the real client source IP address down to your pool member?

Ex:

```
BigIP Floaters:
  10.64.21F.132/26
  10.64.19Z.210/26
pool member:
  server1:
    addresses:
      eth0: 10.64.21F.180/26
      eth1: 192.168.1.200/24
```

The rc script is run on server1 and reads in the sample sysconfig shown above. The script notices that BigIP floater 10.64.21F.132/26 lives in the same subnet as server1's eth0 address. Via iptables/iproute2, the script sets up a "conditional gateway" to that floater for any traffic coming from one of the configured MAC addresses, using the CONNMARK feature in iptables. Rinse and repeat for each interface on the server; in this case the eth1 address is inspected and passed over since its subnet does not match any of the BigIP subnets. Set SNAT to "None" on your virtual and watch the true source IP addresses roll in.

LONG STORY:

So, in my world, if you wanted/needed the client IP address to show up in server logs or be actionable at the server level, you had a few choices:
- Set the BigIP as your default gateway
- Set XFF headers for HTTP
- Have policy-based routes upstream in your network gear (which bit us more than once during troubleshooting)

A good first step... Looking for a better solution, we designed one that carved out a subnet for which the BigIP would be the gateway for an additional interface added to each node. For example: server1 has IP 10.100.100.100 on eth0 with default gateway 10.100.100.1 living on a Cisco device.
We then added another interface, eth1, giving it IP address 10.100.25.100, with the default gateway for that interface being the floating IP address of our BigIP HA pair. We then placed 10.100.25.100 in all of our pools, turned SNAT to "None", and voila! - source IP addresses flowing in. Beautiful. This worked great. It kinda stunk to add secondary/separate interfaces to bare metals/VMs just for this, but it worked and worked well. Carve out a /24, throw in your self IPs and your nodes, and then set the gateway via a bash rc script.

...but buy-in sucked! Retrofitting existing bare metals never... ever... happened. Sprinkle in the fact that we were/are moving into SoftLayer, and we suddenly lost control of VLAN and subnet creation (our VXLAN implementation has since resolved subnetting for us, but that's another story). Phase two of operation Death-to-SNAT was a go.

"To the cloud!" - Taco MacArthur

All new installations are now going into SoftLayer and we're using BigIP VEs. Incorporating this script fit our design constraints pretty well. Since we use Chef for configuration management of our infrastructure, it was much simpler to code which servers get this script applied and not have to bother a network engineer to add another policy-based route every time a new subnet was added. Using iptables to watch all MAC addresses listed in the sysconfig file above, we can now set the policy-based route directly on the Linux node (no escaping the policy route). As long as the MAC address is not coming from a self IP, we can safely assume the traffic is coming from the outside interface of the active unit and route that traffic back to the floater. Self-IP-originating traffic is health-check traffic, and that traffic needs to go out the default gateway; otherwise the node shows failed. You now have the true source IP address showing up on your node.
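The conditional gateway described above boils down to a handful of iptables/iproute2 calls. A simplified sketch of the idea (the MAC, mark, table name, and floater address are placeholders; the real script derives all of these from the sysconfig file):

```shell
# Mark connections whose first packet came from a BIG-IP MAC address
# (i.e. traffic forwarded by the active unit, not self-IP healthchecks).
iptables -t mangle -A PREROUTING -i eth0 \
    -m mac --mac-source 00:14:14:11:63:02 \
    -j CONNMARK --set-mark 0x1

# Copy the connection mark onto reply packets leaving the node.
iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark

# Send marked replies back via the BIG-IP floater instead of the
# node's default gateway.
echo "100 bigip" >> /etc/iproute2/rt_tables
ip rule add fwmark 0x1 table bigip
ip route add default via 10.64.210.132 dev eth0 table bigip
```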
Set SNAT to "None" on your virtual and watch the true source IP addresses roll in. That's it. It works well for us. Hope it works well for you, too!

How to use this snippet:
1. Copy the contents of the bigip-gw entry from the GitHub link below into /etc/init.d/bigip-gw
2. Copy and adjust the sysconfig example above into /etc/sysconfig/bigip-gw-config (you may also find a sample at the GitHub link below)
3. Note that the script is not long-running; it just adds/removes entries in rt_tables, adds/removes routes and rules, and adds/removes iptables entries
4. Ensure iptables is enabled on the server (node/pool member)
5. Enable and start /etc/init.d/bigip-gw: chmod 755 /etc/init.d/bigip-gw && chkconfig bigip-gw on && service bigip-gw start
6. Finally, set SNAT to "None" once all your pool members have started the script

Code: https://github.com/dfosborne2/F5-BigIP

Tested this on version: 11.5
Route domain is not compatible with snat list global

We have a problem: "Route domain is not compatible with snat list global". The F5 is deployed one-armed. The existing configuration has a global SNAT list for the virtual server. Now I have created a new VLAN and route domain, and when we create the global SNAT list with the new route domain it does not work. Please let me know about this problem.
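A hedged sketch of the usual workaround (the object names, route-domain ID, and addresses are placeholders): a SNAT with a global VLAN list cannot mix route domains, so scope a SNAT to the new route domain instead and use the %&lt;id&gt; suffix on its addresses:

```shell
# tmsh sketch: SNAT scoped to route domain 2 and its VLAN only
tmsh create ltm snat snat_rd2 \
    origins add { 0.0.0.0%2/0 } \
    translation 10.20.30.40%2 \
    vlans add { vlan_new } vlans-enabled
```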
SNAT Issue with two virtual servers

I'm having an issue wrapping my head around setting up SNAT. I think SNAT is what I need. Here is my setup:

- 192.168.103.125 - IP of the server hosting IIS site www.siteA.com
- 192.168.103.1 - default gateway on server A, which is the F5
- 192.168.100.141 - IP of the virtual server on the F5 for siteA
- 192.168.103.211 - IP of the server hosting IIS site www.siteB.com
- 192.168.103.1 - default gateway on server B, which is the F5
- 192.168.100.140 - IP of the virtual server on the F5 for siteB

If I try to browse to www.siteB.com from the siteA server, it won't work. If I try to browse to www.siteA.com from the siteB server, it won't work. The only way I can get it to work is to create a static route that forces the destination server to route traffic back to the source via the VIP. On server B I add a route: route add 192.168.103.125 mask 255.255.255.255 192.168.100.141. With that in place, I can browse to www.siteB.com from server A.

I read through https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_configuration_guide_10_0_0/ltm_snat.html1199363 but I'm unsure exactly what to set up. One other thing to add: for some reason, when our F5s were set up years ago, the web servers were all put in route domain 1. I don't know if that is part of the problem or not. Appreciate any help.
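A hedged note rather than a confirmed fix: this looks like the classic hairpin case, where the client (one web server) and the pool member (the other) share the 192.168.103.0/24 subnet, so replies bypass the BIG-IP. Enabling SNAT on each virtual (Auto Map, or a SNAT pool) usually removes the need for those host routes. The virtual-server names below are placeholders:

```shell
# tmsh sketch: SNAT the source on both virtuals so replies return
# through the BIG-IP instead of going host-to-host on the shared subnet.
tmsh modify ltm virtual vs_siteA source-address-translation { type automap }
tmsh modify ltm virtual vs_siteB source-address-translation { type automap }
```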
BIG-IP : virtual-server configuration for snat

BIG-IP 11.4.0 Build 2384.0 Final. vip-external-01 is enabled for vlan-external-01 and routes to pool-01, whose members live on vlan-internal-01. vip-external-01 has SNAT Auto Map enabled. vip-internal-01 is enabled for vlan-internal-01 and should be chosen as the self IP for traffic routed by vip-external-01 to pool-01.

On vip-external-01, is it also necessary to enable vlan-internal-01? And on vip-internal-01, is it also necessary to enable vlan-external-01? More generally, how do you configure a simple network to support a browser client's request being sent to the VIP and routed to the destination web server, with the response traveling the reverse path?
BIG-IP : how to determine Self-IP used by SNAT ?

BIG-IP 11.4.0 Build 2384.0 Final. On my virtual server I have Source Address Translation = Auto Map (I believe this is the same as SNAT, correct?). How do I determine the self IP that the BIG-IP will substitute in as the origin IP when routing a request to a pool?
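For what it's worth (hedged, not from the original answers): with Auto Map, the BIG-IP picks a self IP on the VLAN the traffic egresses, preferring a floating self IP when one exists. The quickest way to see the actual choice is to watch the server-side traffic; the VLAN name and pool-member address below are placeholders:

```shell
# On the BIG-IP, capture toward a pool member and read the source address
# of the server-side connection - that is the self IP Auto Map selected.
tcpdump -nni internal_vlan host 10.0.0.21 and port 80
```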
SNAT POOL AUTOMAP ISSUE

Hi, here is the topology:

Client 192.168.81.61 -------- F5 130.97.120.19 -------- Server 130.97.121.131

The client (192.168.81.61) wants to connect to the server (130.97.121.131) via the virtual IP 192.168.120.131:9000. For this purpose, I configured a standard VS on the LTM using the virtual IP 192.168.120.131:9000. If I choose Auto Map as my SNAT pool, the connection is fine, but the source IP is translated to 130.97.120.19, and I really don't want that to happen. If I set the SNAT pool to None, the source IP remains 192.168.81.61, but the TCP connection fails. To find out what's going on, I captured packets on both the client and the server.

On the client, I see these packets:
192.168.81.61 ---SYN---> 192.168.120.131
192.168.120.131 ---SYN ACK---> 192.168.81.61
192.168.81.61 ---ACK---> 192.168.120.131

On the server, I see only these packets:
192.168.81.61 ---SYN---> 130.97.121.131
130.97.121.131 ---SYN ACK---> 192.168.81.61

Apparently, the ACK from the F5 to the server is missing, and I don't know why the F5 doesn't send it. When I use Auto Map for the SNAT pool, the F5 does send the ACK, which is why the connection succeeds. Has anybody run into this issue before? I appreciate your help.
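A hedged observation on captures like the one above: with the SNAT pool set to None, the server's replies must route back through the BIG-IP, or the LTM never sees the server-side SYN ACK and cannot complete the handshake. A minimal sketch of one fix on a Linux pool member, assuming the BIG-IP has a self IP on the server's subnet (130.97.121.1 below is a placeholder for that self IP):

```shell
# Route replies destined for the client network back through the BIG-IP
# instead of the server's default gateway (addresses are placeholders).
ip route add 192.168.81.0/24 via 130.97.121.1 dev eth0
```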