Checksums for F5 supported VMware vCenter cloud templates
Problem this snippet solves:

F5 Networks provides checksums for all of our supported VMware vCenter templates (for other cloud providers, see https://devcentral.f5.com/codeshare/checksums-for-f5-supported-cft-and-arm-templates-on-github-1014). See the README files on GitHub for information on individual templates.

You can find the VMware templates in the appropriate supported directory on GitHub: https://github.com/F5Networks/f5-vmware-vcenter-templates/tree/master/supported

You can get a checksum for a particular template by running one of the following commands, depending on your operating system:

* **Linux**: `sha512sum <path_to_template>`
* **Windows using CertUtil**: `CertUtil -hashfile <path_to_template> SHA512`

You can compare the checksum produced by that command against the list below. To find your hash, copy the script-signature hash out of your template and search for it on this page. To find the script signature, click the link in the Solution File column (look closely at the path to find the template you are using) and search for `script-signature`; the hash immediately follows.
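As a concrete sketch of that verification flow on Linux, something like the following works. The helper names here are our own (not part of any F5 tooling), and the assumption that the template stores the value as `"script-signature": "<hash>"` is ours — adjust the pattern if your template formats it differently.

```shell
#!/bin/sh
# Sketch only: helpers for working with the published template hashes.
# verify_checksum and extract_signature are illustrative names, not F5
# tooling, and the script-signature pattern below is an assumption about
# the template format.

# Compare a downloaded template against a hash from the published list.
verify_checksum() {
    # $1 = path to template, $2 = published SHA-512 hash
    actual=$(sha512sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "OK: $1"
    else
        echo "MISMATCH: $1" >&2
        return 1
    fi
}

# Print the script-signature hash embedded in a template, assuming it
# appears in the file as "script-signature": "<hex hash>".
extract_signature() {
    sed -n 's/.*"script-signature"[[:space:]]*:[[:space:]]*"\([0-9a-fA-F]*\)".*/\1/p' "$1"
}

# Usage (hash truncated here; paste the full value from the list below):
# verify_checksum f5-existing-stack-failover-4nic-bigip.js 38a48d1f93e9...
# extract_signature f5-existing-stack-failover-4nic-bigip.js
```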
**Release 1.4.0**

| Solution File | Hash |
| --- | --- |
| https://github.com/F5Networks/f5-vmware-vcenter-templates/tree/master/supported/failover/same-net/traditional/4nic/existing-stack/f5-existing-stack-failover-4nic-bigip.js | `38a48d1f93e91cafcbe3324f5705ea9573918f48e987f2ad127ff38b3de1a2bbe38225bb62be969bba4e6d655b40f8f5f0a4eec43f3b2999c03ddfc22a0b0744` |
| https://github.com/F5Networks/f5-vmware-vcenter-templates/tree/master/supported/standalone/n-nic/existing-stack/f5-existing-stack-nNic-bigip.js | `5b1ddbbe50a0986b4ef4a0d347f30d36e20cb36186f1741449b47cfe8651e57e13bd76ab21164c0b98730f202823b3aa747d232196d0cc32759f558124b2a973` |

**Release 1.3.0**

| Solution File | Hash |
| --- | --- |
| https://github.com/F5Networks/f5-vmware-vcenter-templates/tree/master/supported/failover/same-net/traditional/4nic/existing-stack/f5-existing-stack-failover-4nic-bigip.js | `75613cb46c9639c9e5632dd6099fa424e1b92c6cfd1e4996fc16a1186a3b73e7e9ea8008ab1a27bddf8d0324260d3e2678ed4b1ff5575cfe7f0a492087998cec` |
| https://github.com/F5Networks/f5-vmware-vcenter-templates/tree/master/supported/standalone/n-nic/existing-stack/f5-existing-stack-nNic-bigip.js | `b0dc9b2d814aff8598426b7b1c94d06bab58cc5d7d76b77d7df1251c782e534eb40655c8104911e9007ae9806daa0d02520c2c86816ec1f76721840bb3f4f324` |

Kill SNAT AutoMap!
Problem this snippet solves:

What it does: solves the need for getting the client IP address down to your servers for logging, etc., with minimal configuration required by the engineer setting it up. No policy routes buried in your firewalls (they live on your nodes instead), no making the BigIP the default gateway for your nodes, etc.

Prerequisites:

* Known to work with RHEL variants at 6.5 (2.6.32 kernel) or better
* Known to work with BigIP LTM 11.5 or better
* Must have SelfIPs/Floater in the same subnet as the nodes utilizing this script
* Will work on virtual interfaces, e.g. eth0:0

Drawbacks:

* Must have SelfIPs/Floater in each subnet in which your nodes also exist
* SelfIPs must be in consecutive order, e.g. 10.10.10.10 and 10.10.10.11 (standalones are exempt from this, of course)
* Must have layer 2 capable gear sitting in front of the BigIP

The IP ADDRESSES AND MAC ADDRESSES BELOW have been scrambled, you will notice. Please bear that in mind.

Example /etc/sysconfig/bigip-gw-config:

```
$ cat /etc/sysconfig/bigip-gw-config
# Vars for bigip-gw init script
# !! ATTENTION HUMAN : !!
# Config is managed by Chef.
# Local changes are fleeting.
#
IFACE_SELECTION="all"
BIGIP_SELFIPS=(10.6X.2Y.200/2F-10.6X.2Y.201/25 10.64.21F.130/26-10.64.21F.131/26 10.64.196.211/26-10.64.196.212/26)
BIGIP_FLOATERS=(10.6X.2Y.202/25 10.64.21F.132/26 10.64.19Z.210/26)
BIGIP_MAC_ADDRS=(00:14:14:11:63:X2 00:F0:F6:ab:fc:0d 00:A4:14:XY:d2:af 00:14:@@:XY:d2:b9 00:dog:72:XY:d2:c3 00:F0:F6:ab:FX:b9)
RT_TABLES=/etc/iproute2/rt_tables
######
# HA_ENABLED - A bit different here. Bash booleans don't work like
# true|false booleans, so you need a function returning a value:
#   true  - { return 0; }
#   false - { return 1; }
# *** NOTE ***
# In most cases, if HA_ENABLED is false, the BIGIP_FLOATERS entry should be
# an empty array.
# **END NOTE**
#
# Sample HA_ENABLED entry:
# HA_ENABLED() { return 0; }  # BigIP is a HA Pair
# -or-
# HA_ENABLED() { return 1; }  # BigIP is a standalone
######
HA_ENABLED() { return 0; }  # true - BigIP is a HA Pair
```

SHORT STORY::

Want a script which will incorporate iptables/iproute2 and layer 2 to leverage a BigIP Floater as the conditional gateway for your Linux nodes, thereby bringing the real client source IP addr down to your pool member?

Ex:

```
BigIP Floaters:
    10.64.21F.132/26
    10.64.19Z.210/26

pool member:
    server1:
        addresses:
            eth0: 10.64.21F.180/26
            eth1: 192.168.1.200/24
```

The rc script is run on server1, which reads in the sample sysconfig shown above. The script will notice that BigIP Floater 10.64.21F.132/26 lives in the same subnet as server1's eth0 address. Via iptables/iproute2, the script will set up a "conditional gateway" to that floater for any traffic coming from one of the configured MAC addresses, using the CONNMARK feature in iptables. Rinse/repeat for each iface on the server; in this case the eth1 address is inspected and passed over since its subnet does not match any of the BigIP subnets. Set SNAT to "None" on your virtual and watch the true source IP addresses roll in.

LONG STORY::

So, in my world, if you wanted/needed the client IP address to show up in server logs or be actionable against at the server level, you had a few choices:

* Set the BigIP as your default gateway
* Set XFF headers for HTTP
* Have policy-based routes upstream in your network gear (which bit us more than once during t'shooting)

A good first step...

Looking for a better way, we designed a solution which carved out a subnet for which the BigIP would be the gateway for an additional interface that would have to be added to each node. For example: server1 has IP 10.100.100.100 on eth0 with default gateway 10.100.100.1 living on a cisco device.
We then added another interface, eth1, giving it IP address 10.100.25.100, with the default gateway for that iface being the floating IP address of our BigIP HA Pair. We then placed 10.100.25.100 in all of our pools, turned SNAT to "None" and voila! - source IP addresses flowing in. Beautiful.

This worked great. It kinda stunk to add secondary/separate interfaces to baremetals/VMs just for this, but it worked and worked well. Carve out a /24, throw in your self IPs, your nodes, and then set the gateway via a bash rc script.

..but buy-in sucked! Well, retrofitting currently existing baremetals never..ever..happened. Sprinkle in the fact that we were/are moving into SoftLayer, and we suddenly lost control of vlan and subnet creation (vxlan implementation has since resolved subnetting for us, but that's another story). Phase two of operation Death-to-SNAT was a go.

"To the cloud!" - Taco MacArthur

All new installations are now going into SoftLayer and we're using BigIP VEs. The incorporation of this script fit in pretty well with our design constraints. As we use Chef for config mgmt of our infrastructure, it was much simpler to just code for which servers would get this script applied and not even have to bother with getting a network engineer to add another policy-based route every time a new subnet was added.

Using iptables to watch all MAC addresses listed in the sysconfig file above, we can now set the policy-based route directly on the Linux node (no escaping the policy route). As long as the MAC address is not coming from a SelfIP, we can safely assume the traffic is coming from the outside interface of the active unit and route that traffic back to the Floater. SelfIP-originating traffic is healthcheck traffic, and that traffic needs to go out the default gateway; otherwise the node shows failed. You now have the true source IP address showing up on your node.
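The mechanism described above can be sketched roughly as the rule set below for a single interface. This is a simplified illustration, not the actual bigip-gw script: the floater address, MAC, mark value, and table name are placeholders of our own choosing.

```shell
#!/bin/sh
# Sketch (not the real bigip-gw script) of the CONNMARK-based conditional
# gateway for one interface. All values below are illustrative placeholders.
FLOATER=10.10.10.1            # BigIP floating self IP in this iface's subnet
BIGIP_MAC=00:11:22:33:44:55   # MAC of the active unit's interface
MARK=0x1
TABLE=bigipgw                 # must also be declared in /etc/iproute2/rt_tables

build_rules() {
    # Mark new connections whose frames arrive from the BigIP's MAC address.
    echo "iptables -t mangle -A PREROUTING -m mac --mac-source $BIGIP_MAC -m state --state NEW -j CONNMARK --set-mark $MARK"
    # Copy the connection mark back onto outbound reply packets.
    echo "iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark"
    # Send marked packets through a routing table whose default gateway is
    # the floater; everything else (e.g. SelfIP healthchecks) keeps using
    # the normal default route.
    echo "ip rule add fwmark $MARK table $TABLE"
    echo "ip route add default via $FLOATER table $TABLE"
}

# Print the rules for review; run them as root to actually apply them.
build_rules
```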
That's it. Works well for us. Hope it works well for you, too!

How to use this snippet:

1. Copy the contents of the bigip-gw entry from the GitHub link below into /etc/init.d/bigip-gw
2. Copy and adjust the sysconfig example above into /etc/sysconfig/bigip-gw-config. You may also find a sample at the GitHub link below. Please note that the script is not long-running: it just adds/removes entries from rt_tables, adds/removes routes and rules, and adds/removes iptables entries.
3. Ensure iptables is enabled on the server (node/pool member)
4. Enable and start /etc/init.d/bigip-gw: `chmod 755 /etc/init.d/bigip-gw && chkconfig bigip-gw on && service bigip-gw start`
5. Finally, set SNAT to "None" once all your pool members have started the script

Code : https://github.com/dfosborne2/F5-BigIP

Tested this on version: 11.5
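For the curious, the same-subnet test at the heart of the approach (matching a BigIP floater against each local interface address) can be sketched in plain shell. The function names and addresses below are illustrative stand-ins, not code from the bigip-gw script.

```shell
#!/bin/sh
# Sketch of the same-subnet check: does a floater share a network with one
# of this host's addresses under a given prefix length? Names and addresses
# are illustrative only, not taken from the bigip-gw script.

ip_to_int() {
    # Convert a dotted-quad IPv4 address to a 32-bit integer.
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

same_subnet() {
    # $1 = host address, $2 = floater address, $3 = prefix length
    mask=$(( (0xffffffff << (32 - $3)) & 0xffffffff ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# eth0's address shares the floater's /26, so it gets a conditional gateway...
same_subnet 10.64.21.180 10.64.21.132 26 && echo "eth0: same subnet"
# ...while eth1's address does not, and is skipped.
same_subnet 192.168.1.200 10.64.21.132 26 || echo "eth1: different subnet"
```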