Kill SNAT AutoMap!
Problem this snippet solves:

Gets the real client IP address down to your servers (for logging, etc.) with minimal configuration required by the engineer setting it up. No policy routes buried in your firewalls (they live on your nodes instead), no making the BigIP the default gateway for your nodes, etc.

Prerequisites:

- Known to work with RHEL variants at 6.5 (2.6.32 kernel) or better
- Known to work with BigIP LTM 11.5 or better
- Must have SelfIPs/Floater in the same subnet as the nodes utilizing this script
- Will work on virtual interfaces, e.g. eth0:0

Drawbacks:

- Must have SelfIPs/Floater in each subnet in which your nodes also exist
- SelfIPs must be in consecutive order, e.g. 10.10.10.10 and 10.10.10.11 (standalones are exempt from this, of course)
- Must have layer-2-capable gear sitting in front of the BigIP

The IP addresses and MAC addresses below have been scrambled, you will notice. Please bear that in mind.

Example /etc/sysconfig/bigip-gw-config:

```
$ cat /etc/sysconfig/bigip-gw-config
# Vars for bigip-gw init script
# !! ATTENTION HUMAN : !!
# Config is managed by Chef.
# Local changes are fleeting.
#
IFACE_SELECTION="all"
BIGIP_SELFIPS=(10.6X.2Y.200/2F-10.6X.2Y.201/25 10.64.21F.130/26-10.64.21F.131/26 10.64.196.211/26-10.64.196.212/26)
BIGIP_FLOATERS=(10.6X.2Y.202/25 10.64.21F.132/26 10.64.19Z.210/26)
BIGIP_MAC_ADDRS=(00:14:14:11:63:X2 00:F0:F6:ab:fc:0d 00:A4:14:XY:d2:af 00:14:@@:XY:d2:b9 00:dog:72:XY:d2:c3 00:F0:F6:ab:FX:b9)
RT_TABLES=/etc/iproute2/rt_tables

######
# HA_ENABLED - A bit different here. Bash booleans don't work like true|false
# booleans, so you need a function returning a value:
#   true  - { return 0; }
#   false - { return 1; }
# *** NOTE ***
# In most cases, if HA_ENABLED is false, the BIGIP_FLOATERS entry should be
# an empty array.
# **END NOTE**
#
# Sample HA_ENABLED entry:
#   HA_ENABLED() { return 0; }  # BigIP is an HA pair
#   -or-
#   HA_ENABLED() { return 1; }  # BigIP is a standalone
######
HA_ENABLED() { return 0; }  # true - BigIP is an HA pair
```

SHORT STORY:

Want a script which will incorporate iptables/iproute2 and layer 2 to leverage a BigIP Floater as the conditional gateway for your Linux nodes, thereby bringing the real client source IP address down to your pool member?

Ex:

```
BigIP Floaters:
  10.64.21F.132/26
  10.64.19Z.210/26

pool member:
  server1:
    addresses:
      eth0: 10.64.21F.180/26
      eth1: 192.168.1.200/24
```

The rc script is run on server1 and reads in the sample sysconfig shown above. The script will notice that BigIP Floater 10.64.21F.132/26 lives in the same subnet as server1's eth0 address. Via iptables/iproute2, the script will set up a "conditional gateway" to that floater for any traffic coming from one of the configured MAC addresses, using the CONNMARK feature in iptables. Rinse/repeat for each iface on the server. In this case eth1's address is inspected and passed over, since its subnet does not match any of the BigIP subnets. Set SNAT to "None" on your virtual and watch the true source IP addresses roll in.

LONG STORY:

So, in my world, if you wanted/needed the client IP address to show up in server logs or be actionable against at the server level, you had a few choices:

- Set the BigIP as your default gateway
- Set XFF headers for HTTP
- Have policy-based routes upstream in your network gear (which bit us more than once during t'shooting)

A good first step... Looking for a better solution, we designed one which carved out a subnet for which the BigIP would be the gateway for an additional interface added to each node. For example: server1 has IP 10.100.100.100 on eth0 with default gateway 10.100.100.1 living on a Cisco device.
We then added another interface, eth1, giving it IP address 10.100.25.100, with the default gateway for that iface being the floating IP address of our BigIP HA pair. We then placed 10.100.25.100 in all of our pools, turned SNAT to "None", and voila! Source IP addresses flowing in. Beautiful. This worked great. It kinda stunk to add secondary/separate interfaces to baremetals/VMs just for this, but it worked and worked well. Carve out a /24, throw in your self IPs and your nodes, then set the gateway via a bash rc script.

...but buy-in sucked! Retrofitting currently existing baremetals never... ever... happened. Sprinkle in the fact that we were/are moving into SoftLayer, and we suddenly lost control of VLAN and subnet creation (a VXLAN implementation has since resolved subnetting for us, but that's another story). Phase two of operation Death-to-SNAT was a go.

"To the cloud!" - Taco MacArthur

All new installations are now going into SoftLayer and we're using BigIP VEs. This script fit in pretty well with our design constraints. As we use Chef for config management of our infrastructure, it was much simpler to just code for which servers would get this script applied, and not have to bother with getting a network engineer to add another policy-based route every time a new subnet was added. Using iptables to watch all MAC addresses listed in the sysconfig file above, we can now set the policy-based route directly on the Linux node (no escaping the policy route). As long as the MAC address is not coming from a SelfIP, we can safely assume the traffic is coming from the outside interface of the active unit, and we route that traffic back to the Floater. SelfIP-originating traffic is healthcheck traffic, and that traffic needs to go out the default gateway; otherwise the node shows failed. You now have the true source IP address showing up on your node.
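To illustrate the plumbing described above, here is a hedged, dry-run sketch of the kind of iptables/iproute2 commands such a script would generate. The MAC addresses, mark value, floater address, and table name are all placeholders (not values from the real bigip-gw script), and `run` only echoes each command instead of executing it:

```shell
#!/usr/bin/env bash
# Dry-run sketch: prints conditional-gateway commands instead of running
# them. All values below are hypothetical placeholders.

FLOATER=10.64.215.132                 # placeholder BigIP floater in eth0's subnet
MARK=0x66                             # placeholder connection mark
TABLE=bigipgw                         # placeholder /etc/iproute2/rt_tables entry
BIGIP_MAC_ADDRS=(00:50:56:aa:bb:cc 00:50:56:aa:bb:cd)

run() { echo "$*"; }                  # swap echo for real execution on a node

# Mark connections whose packets arrive from a BigIP MAC address...
for mac in "${BIGIP_MAC_ADDRS[@]}"; do
  run iptables -t mangle -A PREROUTING -i eth0 -m mac --mac-source "$mac" \
      -j CONNMARK --set-mark "$MARK"
done

# ...restore that mark onto the node's reply packets...
run iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark

# ...and route marked replies through the floater instead of the default gw.
run ip rule add fwmark "$MARK" table "$TABLE"
run ip route add default via "$FLOATER" table "$TABLE"
```

Healthcheck traffic sourced from the SelfIPs carries no mark, so it still leaves via the node's normal default gateway, which is exactly the behavior the article calls for.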
That's it. Works well for us. Hope it works well for you, too!

How to use this snippet:

- Copy the contents of the bigip-gw entry from the GitHub link below into /etc/init.d/bigip-gw
- Copy and adjust the sysconfig example above into /etc/sysconfig/bigip-gw-config. You may also find a sample at the GitHub link below. Please note that the script is not long-running: it just adds/removes entries from rt_tables, adds/removes routes and rules, and adds/removes iptables entries.
- Ensure iptables is enabled on the server (node/pool member)
- Enable and start /etc/init.d/bigip-gw: `chmod 755 /etc/init.d/bigip-gw && chkconfig bigip-gw on && service bigip-gw start`
- Finally, set SNAT to "None" once all your pool members have started the script

Code: https://github.com/dfosborne2/F5-BigIP

Tested this on version: 11.5
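The core subnet test the short story describes (does a floater live in the same subnet as a given interface address?) can be sketched in a few lines of pure bash. The function names and sample addresses below are illustrative only, not taken from the actual bigip-gw script:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the subnet-matching step. Function names and
# addresses are hypothetical, not from the real bigip-gw init script.

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# same_subnet ADDR1 ADDR2 PREFIXLEN -> true if both fall in the same subnet.
same_subnet() {
  local mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$1") & mask )) -eq $(( $(ip2int "$2") & mask )) ]
}

# eth0's address shares a /26 with the floater, so the floater becomes the
# conditional gateway for eth0; eth1's address matches nothing and is skipped.
same_subnet 10.64.215.180 10.64.215.132 26 && echo "eth0: floater matches"
same_subnet 192.168.1.200 10.64.215.132 26 || echo "eth1: no floater in subnet"
```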
How to Determine Public IP when using an AutoMap SNAT with TCPDUMP?

All, I have a situation where I am trying to determine the client IP when using AutoMap on my VIP. I can find the packets I am interested in as they pass from the AutoMap IP to the pool members using tcpdump. Obviously, the source IP in my captures always shows the F5 AutoMap IP. Is there any way to follow sequence numbers, or something else that would reveal the packet as it came to the VIP, given that I have the packet info going to the pool members?

What is odd is that I find the packets with a source of the AutoMap address and destinations of the pool members (not always the same member). In the packet details I find the info I am looking for, in this case an FTP login attempt that fails. But if I filter my tcpdump on the VIP, I never find any of the same kind of payload I see when I filter on the bad login attempt that happens over and over. What could I be missing? At first I thought someone internal was going directly to the server, but if that were true I would expect to see that LAN client's IP instead of the AutoMap... hmmm, unless they are in a different subnet and still needing AutoMap. That of course takes me back to the original question: how the heck do I match up capture data coming to a pool member with data coming into the VIP? Hopefully this is not stupid; I figure there has to be a way. And no, we can't turn off AutoMap to use X-Forwarded-For etc., as this is FTP. I am happy to provide capture detail if needed.

Raymond
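Not a definitive answer, but one avenue worth checking (the address and output path below are placeholders): on recent BIG-IP versions, F5's modified tcpdump accepts a `p` interface modifier that also captures the peer flow of any flow matching the filter, so the client-side and SNATed server-side legs of a connection land in the same capture:

```shell
# Capture traffic to a pool member AND each matched flow's client-side peer.
# 10.10.10.50 is a placeholder pool-member address; 0.0 means "all VLANs",
# and the trailing 'p' modifier pulls in the peer flow of every match.
tcpdump -nni 0.0:p -s0 -w /var/tmp/ftp-snat.pcap host 10.10.10.50 and port 21
```

Opening the pcap in a tool like Wireshark should then show which SNATed server-side connection belongs to which client-side connection, without guessing from sequence numbers.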
Asymmetric traffic because of disabling SNAT on VIP

Hi all, I changed SNAT from Automap to None on the DNS VIP in order to pass through the source IP addresses of the DNS queries, so that the QRadar logs are meaningful. I also set the default gateway of the DNS nodes behind the VIP to the F5 self IP. We started getting DNS logs into QRadar with the real sources of the queries, but I realized that DNS no longer works for the clients/servers that are in the same subnet as the DNS servers behind the VIP: the DNS servers reply directly to clients/servers in the same subnet rather than returning through the F5. I have a workaround for this case (creating another DNS VIP with the same nodes and SNAT set to Automap), but with that solution we cannot get the logs for the affected subnet. Is there any solution to prevent this asymmetric traffic without re-enabling SNAT? (BIG-IP LTM 12.1.0) Thanks