BIG-IP Configuration Conversion Scripts
Kirk Bauer, John Alam, and Pete White created a handful of perl and/or python scripts aimed at easing your migration from some of the “other guys” to BIG-IP. While they aren’t going to map every nook and cranny of the configurations to a BIG-IP feature, they will get you well along the way, taking out as much of the human error element as possible. Links to the codeshare articles below.

- Cisco ACE (perl)
- Cisco ACE via tmsh (perl)
- Cisco ACE (python)
- Cisco CSS (perl)
- Cisco CSS via tmsh (perl)
- Cisco CSM (perl)
- Citrix Netscaler (perl)
- Radware via tmsh (perl)
- Radware (python)

A Brief Introduction To External Application Verification Monitors
Background

External Application Verification (EAV) monitors are one of the most useful and extensible features of the BIG-IP product line. They give the end user the ability to utilize the underlying Linux operating system to perform complex and thorough service checks. Given a service that does not have a monitor provided, a lot of users will assign the closest related monitor and consider the solution complete. There are more than a few cases where a TCP or UDP monitor will mark a service “up” even while the service is unresponsive. EAVs give us the ability to dive much deeper than merely performing a 3-way handshake and neglecting the other layers of the application or service.

How EAVs Work

An EAV monitor is an executable script located on the BIG-IP’s file system (usually under /usr/bin/monitors) that is executed at regular intervals by the bigd daemon and reports its status. One of the most common misconceptions (especially amongst those with *nix backgrounds) is that the exit status of the script dictates the fate of the pool member. The exit status has nothing to do with how bigd interprets the pool member’s health. Any output to stdout (standard output) from the script will mark the pool member “up”. This is a nuance that should receive special attention when architecting your next EAV. Analyze each line of your script and make sure nothing will inadvertently get directed to stdout during monitor execution. The most common example is when someone writes a script that echoes “up” when the checks execute correctly and “down” when they fail. The pool member will be enabled by the BIG-IP under both circumstances, rendering a useless monitor.

The bigd daemon automatically provides two arguments to the EAV’s script upon execution: the node IP address and the node port number. The node IP address is provided with an IPv6 prefix that may need to be removed in order for the script to function correctly.
You’ll notice we remove the “::ffff:” prefix with a sed substitution in the example below. Other arguments can be provided to the script when configured in the UI (or command line); the user-provided arguments will have offsets of $3, $4, and so on. Without further ado, let’s take a look at a service-specific monitor that gives us a more complete view of the application’s health.

An Example

On more than one occasion I have seen a DNS pool member successfully pass the TCP monitor while the DNS service was unresponsive. As a result, a more invasive inspection is required to make sure that the DNS service is in fact serving valid responses. Let’s take a look at an example:

    #!/bin/bash
    # $1 = node IP
    # $2 = node port
    # $3 = hostname to resolve

    [[ $# != 3 ]] && logger -p local0.error -t ${0##*/} -- \
        "usage: ${0##*/} <node IP> <node port> <hostname to resolve>" && exit 1

    node_ip=$(echo $1 | sed 's/::ffff://')

    dig +short @$node_ip $3 IN A &> /dev/null

    [[ $? == 0 ]] && echo "UP"

We are using the dig (Domain Information Groper) command to query our DNS server for an A record. We use the exit status from dig to determine if the monitor will pass. Notice how the script will never output anything to stdout other than "UP" in the case of success. If there aren’t enough arguments for the script to proceed, we output the usage to /var/log/ltm and exit. This is a very simple 13-line script, but an effective example.
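To see the prefix handling in isolation, here is a quick demonstration of the same sed substitution outside of the monitor (the address is a made-up example):

```shell
# bigd hands the script the node IP with an IPv6-mapped prefix,
# e.g. "::ffff:10.10.10.1". The sed substitution strips the prefix
# so tools like dig receive a plain IPv4 address.
raw_ip="::ffff:10.10.10.1"
node_ip=$(echo "$raw_ip" | sed 's/::ffff://')
echo "$node_ip"   # prints 10.10.10.1
```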
The Takeaways

- The command should be as lightweight and efficient as possible.
- If the same result can be accomplished with a built-in monitor, use it.
- EAV monitors don’t rely on the command’s exit status, only standard output.
- Send all error and informational messages to logger instead of stdout or stderr (standard error).
- “UP” has no significance; it is just a series of characters sent to stdout. The monitor would still pass if the script echoed “DOWN”.

Conclusion

When I first discovered EAV monitors, it opened up a whole realm of possibilities that I could not accomplish with built-in monitors. It gives you the ability to do more thorough checking as well as place logic in your monitors. While my example was a simple bash script, BIG-IP also ships with Perl and Python along with their standard libraries, which offer endless possibilities. In addition to using the built-in commands and libraries, it would be just as easy to write a monitor in a compiled language (C, C++, or whatever your flavor may be) and statically compile it before uploading it to the BIG-IP. If you are new to EAVs, I hope this gives you the tools to make your environments more robust and resilient. If you’re more of a seasoned veteran, we’ll have more fun examples in the near future.

DHCP Relay Virtual Server
BIG-IP LTM version 11.1 introduces the DHCP Relay Virtual Server. Previously, it was possible to forward the requests with a set of extensive iRules that probed deeply into the ways of binary, but with the new virtual server type, it is trivial.

How DHCP Works

DHCP is defined in RFC 2131 and RFC 2132 for clients and servers, as well as RFC 1542 for relay agents. The basic (successful) operation of a DHCP transaction between client and server is shown below. A client issues a broadcast in the DHCP Discover, one or more DHCP servers respond with an offer, the client responds with a request for the offered binding IP address, and the server acknowledges.

A DHCP relay comes into play when a network grows beyond a handful of subnets and centralized control is desired. Because a DHCP Discover is a broadcast packet, it would never reach a centralized server, as the packet would never cross the broadcast domain into another segment. So the job of a relay is to take that broadcast, package it as a unicast request, and send it on to the defined DHCP servers. Consider the test lab below: I have two DHCP servers configured on one side of a BIG-IP LTM VE, and a client configured for DHCP on the other. With no configuration on the LTM, the LTM receives the broadcast, but does nothing with it:

    09:37:11.596823 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
    09:37:14.689826 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
    09:37:20.522498 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
    09:37:27.609915 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
    09:37:42.846379 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300

Creating the Configuration

The configuration is very simple.
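For reference, the pool half of the configuration described next can also be built from the command line. This is a rough sketch using the lab's DHCP server addresses; the exact tmsh syntax for the DHCP Relay virtual itself varies by version, so only the pool is shown:

```shell
# Sketch: create the pool of DHCP servers from tmsh.
# 192.168.40.102/103 are the lab DHCP servers; port 67 (bootps) for IPv4.
tmsh create ltm pool dhcp_servers members add { 192.168.40.102:67 192.168.40.103:67 }
tmsh save / sys config
```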
Create a pool of your DHCP servers, assigning IP and port as appropriate (port 67 for IPv4, port 547 for IPv6). The LB algorithm doesn’t matter, as all servers will receive the request. The virtual server configuration is equally simple: name it, select the type as DHCP Relay, and then choose the IPv4 or IPv6 destination. Also, define the VLANs this virtual should listen on. In my case, that is net106, where my DHCP client resides. Now, a DHCP Discover from my client is forwarded as expected to my DHCP servers:

    08:57:30.176648 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
    08:57:30.176766 IP 192.168.106.5.bootps > 192.168.40.102.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
    08:57:30.176771 IP 192.168.106.5.bootps > 192.168.40.103.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300

Caveats

In a chained configuration where there are multiple BIG-IP LTMs between client and server, it will be necessary to preserve the source of the originating relay agent (the self IP of the first BIG-IP LTM receiving the broadcast). This is accomplished with a no-translate snat address:

    ltm snat dhcp-no-translate {
        origins {
            192.168.106.5/32 { }
        }
        translation /Common/192.168.106.5
    }

as well as a now-unicast DHCP relay on the second BIG-IP LTM, as shown in the diagram below. For DHCP lease renewal, which is unicast, a forwarding virtual server should be configured (0.0.0.0:67/0.0.0.0) and a no-translate snat should be in place as well. Note that the unicast renewal is optional; if it is unsuccessful, the client will revert to broadcast. A request for enhancement that would include the no-translate and DHCP renewal configuration as part of the DHCP Relay virtual server type selection has been submitted for consideration for future versions.

Writing to and rotating custom log files
Sometimes I need to log information from iRules to debug something. So I add a simple log statement, like this:

    when HTTP_REQUEST {
        if { [HTTP::uri] equals "/secure" } {
            log local0. "[IP::remote_addr] attempted to access /secure"
        }
    }

This is fine, but it clutters up the /var/log/ltm log file. Ideally I want to log this information into a separate log file. To accomplish this, I first change the log statement to incorporate a custom string - I chose the string "##":

    when HTTP_REQUEST {
        if { [HTTP::uri] equals "/secure" } {
            log local0. "##[IP::remote_addr] attempted to access /secure"
        }
    }

Now I have to customize syslog to catch this string and send it somewhere other than /var/log/ltm. I do this by customizing syslog with an include statement:

    tmsh modify sys syslog include '"
    filter f_local0 {
        facility(local0) and not match(\": ##\");
    };
    filter f_local0_customlog {
        facility(local0) and match(\": ##\");
    };
    destination d_customlog {
        file(\"/var/log/customlog\" create_dirs(yes));
    };
    log {
        source(local);
        filter(f_local0_customlog);
        destination(d_customlog);
    };
    "'

save the configuration change:

    tmsh save / sys config

and restart the syslog-ng service:

    tmsh restart sys service syslog-ng

The included "f_local0" filter overrides the built-in "f_local0" syslog-ng filter, since the include statement will be the last one to load. The "not match" statement is a regular expression that will prevent any statement containing the ": ##" string from being written to the /var/log/ltm log. The next filter, "f_local0_customlog", catches the "##" log statements, and the remaining include statements handle the job of sending them to a new destination, which is a file I chose to name "/var/log/customlog". You may be asking yourself why I chose to match the string ": ##" instead of just "##". It turns out that specifying just "##" also catches AUDIT log entries which (in my configuration) are written every time an iRule with the string "##" is modified.
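The distinction is easy to see with grep and two fabricated log lines (both lines are illustrative only, not real BIG-IP output):

```shell
# Two made-up log lines: an iRule log statement and an AUDIT entry.
# Both contain "##", but only the iRule line contains ": ##",
# because the iRule message text directly follows the "tmm[...]: " prefix.
printf '%s\n' \
  'Jun  1 10:00:00 bigip1 info tmm[1234]: ##10.0.0.1 attempted to access /secure' \
  'Jun  1 10:00:01 bigip1 notice mcpd[5678]: AUDIT - client tmsh modified rule containing ##' \
  | grep ': ##'
```

Only the first line survives the `': ##'` match, which is exactly the separation the syslog-ng filters rely on.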
But only the log statement from the actual iRule itself will contain the ": ##" string. This slight tweak keeps those two entries separated from each other. So now I have a way to force my iRule logging statements to a custom log file. This is great, but how do I incorporate this custom log file into the log rotation scheme like most other log files? The answer is with a logrotate include statement:

    tmsh modify sys log-rotate syslog-include '"
    /var/log/customlog {
        compress
        missingok
        notifempty
    }
    "'

and save the configuration change:

    tmsh save / sys config

Logrotate is kicked off by cron, and the change should get picked up the next time it is scheduled to run. And that's it. I now have a way to force iRule log statements to a custom log file which is rotated just like every other log file. It’s important to note that you must save the configuration with "tmsh save / sys config" whenever you execute an include statement. If you don't, your changes will be lost the next time your configuration is loaded. That's why I think this solution is so great - it's visible in the bigip_sys.conf file, not like customizing configuration files directly. And it's portable.

Converting a Cisco ACE configuration file to F5 BIG-IP Format
In September, Cisco announced that it was ceasing development and pulling back on sales of its Application Control Engine (ACE) load balancing modules. Customers of Cisco’s ACE product line will now have to look for a replacement product to solve their load balancing and application delivery needs. One of the first questions that will come up when a customer starts looking into replacement products surrounds the issue of upgradability. Will the customer be able to import their current configuration into the new technology, or will they have to start with the new product from scratch? For smaller businesses, starting over can be a refreshing way to clean up some of the things you’ve been meaning to but weren’t able to for one reason or another. But for a large majority of the users out there, starting over from nothing with a new product is a daunting task. To help those users considering a move to the F5 universe, DevCentral has included several scripts to assist with the configuration migration process. In our Codeshare section we created some scripts useful in converting ACE configurations into their respective F5 counterparts:

https://devcentral.f5.com/s/articles/cisco-ace-to-f5-big-ip
https://devcentral.f5.com/s/articles/Cisco-ACE-to-F5-Conversion-Python-3
https://devcentral.f5.com/s/articles/cisco-ace-to-f5-big-ip-via-tmsh

We also have scripts covering Cisco’s CSS (https://devcentral.f5.com/s/articles/cisco-css-to-f5-big-ip) and CSM (https://devcentral.f5.com/s/articles/cisco-csm-to-f5-big-ip) products as well. In this article, I’m going to focus on the "ace2f5-tmsh" script in the ace2f5.zip script library. The script takes as input an ACE configuration and creates a TMSH script to create the corresponding F5 BIG-IP objects:

    $ perl ace2f5-tmsh.pl ace_config > tmsh_script

We could leave it at that, but I’ll use this article to discuss the components of the ACE configuration and how they map to F5 objects.
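To give a feel for the kind of text transformation involved (this is a simplified sketch, not the actual ace2f5-tmsh logic, which is written in Perl and handles far more), an ACE rserver block can be mapped to a tmsh node command with a few lines of awk:

```shell
# Simplified sketch: turn an ACE "rserver" block into a tmsh
# "create ltm node" command. Sample input uses an rserver from this article.
printf '%s\n' \
  'rserver host R191-JOEINC0061' \
  '  ip address 10.213.240.86' \
  '  inservice' \
  | awk '
      /^rserver host/ { name = $3 }
      /ip address/    { print "create ltm node " name " address " $3 }
    '
# prints: create ltm node R191-JOEINC0061 address 10.213.240.86
```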
ip

The ip object in the ACE configuration is defined like this:

    ip route 0.0.0.0 0.0.0.0 10.211.143.1

It equates to a tmsh "net route" command:

    net route 0.0.0.0-0 {
        network 0.0.0.0/0
        gw 10.211.143.1
    }

rserver

An "rserver" is basically a node containing a server address, including an optional "inservice" attribute indicating whether it’s active or not. It will be used to find the IP address for a given rserver hostname.

ACE configuration:

    rserver host R190-JOEINC0060
      ip address 10.213.240.85
    rserver host R191-JOEINC0061
      ip address 10.213.240.86
      inservice
    rserver host R192-JOEINC0062
      ip address 10.213.240.88
      inservice
    rserver host R193-JOEINC0063
      ip address 10.213.240.89
      inservice

serverfarm

A serverfarm is an LTM pool, except that it doesn’t have a port assigned to it yet.

ACE configuration:

    serverfarm host MySite-JoeInc
      predictor hash url
      rserver R190-JOEINC0060
      inservice
      rserver R191-JOEINC0061
      inservice
      rserver R192-JOEINC0062
      inservice
      rserver R193-JOEINC0063
      inservice

F5 configuration:

    ltm pool Insiteqa-JoeInc {
        load-balancing-mode predictive-node
        members {
            10.213.240.86:any { address 10.213.240.86 }
            10.213.240.88:any { address 10.213.240.88 }
            10.213.240.89:any { address 10.213.240.89 }
        }
    }

probe

A "probe" is an LTM monitor, except that it does not have a port. The ACE configuration:

    probe tcp MySite-JoeInc
      interval 5
      faildetect 2
      passdetect interval 10
      passdetect count 2

maps to the tmsh "ltm monitor" command. F5 configuration:

    ltm monitor Insiteqa-JoeInc {
        defaults from tcp
        interval 5
        timeout 10
        retry 2
    }

sticky

The "sticky" object is a way to create a persistence profile. First you tie the serverfarm to the persist profile, then you tie the profile to the virtual server.
ACE configuration:

    sticky ip-netmask 255.255.255.255 address source MySite-JoeInc-sticky
      timeout 60
      replicate sticky
      serverfarm MySite-JoeInc

class-map

A "class-map" assigns a listener, or virtual IP address and port number, which is used for the clientside and serverside of the connection.

ACE configuration:

    class-map match-any vip-MySite-JoeInc-12345
      2 match virtual-address 10.213.238.140 tcp eq 12345
    class-map match-any vip-MySite-JoeInc-1433
      2 match virtual-address 10.213.238.140 tcp eq 1433
    class-map match-any vip-MySite-JoeInc-31314
      2 match virtual-address 10.213.238.140 tcp eq 31314
    class-map match-any vip-MySite-JoeInc-8080
      2 match virtual-address 10.213.238.140 tcp eq 8080
    class-map match-any vip-MySite-JoeInc-http
      2 match virtual-address 10.213.238.140 tcp eq www
    class-map match-any vip-MySite-JoeInc-https
      2 match virtual-address 10.213.238.140 tcp eq https

policy-map

A policy-map of type loadbalance simply ties the persistence profile to the virtual server. The "multi-match" attribute constructs the virtual server by tying a bunch of objects together.
ACE configuration:

    policy-map type loadbalance first-match vip-pol-MySite-JoeInc
      class class-default
        sticky-serverfarm MySite-JoeInc-sticky
    policy-map multi-match lb-MySite-JoeInc
      class vip-MySite-JoeInc-http
        loadbalance vip inservice
        loadbalance policy vip-pol-MySite-JoeInc
        loadbalance vip icmp-reply
      class vip-MySite-JoeInc-https
        loadbalance vip inservice
        loadbalance vip icmp-reply
      class vip-MySite-JoeInc-12345
        loadbalance vip inservice
        loadbalance policy vip-pol-MySite-JoeInc
        loadbalance vip icmp-reply
      class vip-MySite-JoeInc-31314
        loadbalance vip inservice
        loadbalance policy vip-pol-MySite-JoeInc
        loadbalance vip icmp-reply
      class vip-MySite-JoeInc-1433
        loadbalance vip inservice
        loadbalance policy vip-pol-MySite-JoeInc
        loadbalance vip icmp-reply
      class reals
        nat dynamic 1 vlan 240
      class vip-MySite-JoeInc-8080
        loadbalance vip inservice
        loadbalance policy vip-pol-MySite-JoeInc
        loadbalance vip icmp-reply

F5 configuration:

    ltm virtual vip-Insiteqa-JoeInc-12345 {
        destination 10.213.238.140:12345
        pool Insiteqa-JoeInc
        persist my_source_addr
        profiles {
            tcp {}
        }
    }
    ltm virtual vip-Insiteqa-JoeInc-1433 {
        destination 10.213.238.140:1433
        pool Insiteqa-JoeInc
        persist my_source_addr
        profiles {
            tcp {}
        }
    }
    ltm virtual vip-Insiteqa-JoeInc-31314 {
        destination 10.213.238.140:31314
        pool Insiteqa-JoeInc
        persist my_source_addr
        profiles {
            tcp {}
        }
    }
    ltm virtual vip-Insiteqa-JoeInc-8080 {
        destination 10.213.238.140:8080
        pool Insiteqa-JoeInc
        persist my_source_addr
        profiles {
            tcp {}
        }
    }
    ltm virtual vip-Insiteqa-JoeInc-http {
        destination 10.213.238.140:http
        pool Insiteqa-JoeInc
        persist my_source_addr
        profiles {
            tcp {}
            http {}
        }
    }
    ltm virtual vip-Insiteqa-JoeInc-https {
        destination 10.213.238.140:https
        profiles {
            tcp {}
        }
    }

Conclusion

If you are considering migrating from Cisco’s ACE to F5, I’d recommend you take a look at the Cisco conversion scripts to assist with the conversion.

Troubleshooting TLS Problems With ssldump
Introduction

Transport Layer Security (TLS) is used to secure network communications between two hosts. TLS largely replaced SSL (Secure Sockets Layer) starting in 1999, but many browsers still provide backwards compatibility for SSL version 3. TLS is the basis for securing all HTTPS communications on the Internet. BIG-IP provides the benefit of being able to offload the encryption and decryption of TLS traffic onto a purpose-specific ASIC. This provides performance benefits for the application servers, but also provides an extra layer for troubleshooting when problems arise. It can be a daunting task to tackle a TLS issue with tcpdump alone. Luckily, there is a utility called ssldump. Ssldump looks for TLS packets and decodes the transactions, then outputs them to the console or to a file. It will display all the components of the handshake, and if a private key is provided it will also display the encrypted application data. The ability to fully examine communications from the application layer down to the network layer in one place makes troubleshooting much easier.

Note: The user interface of the BIG-IP refers to everything as SSL, with little mention of TLS. The actual protocol being negotiated in these examples is TLS version 1.0, which appears as “Version 3.1” in the handshakes. For more information on the major and minor versions of TLS, see the TLS record protocol section of the Wikipedia article.
Overview of ssldump

I will spare you the man page, but here are a few of the options we will be using to examine traffic in our examples:

    ssldump -A -d -k <key file> -n -i <capture VLAN> <traffic expression>

- -A  Print all fields
- -d  Show application data when a private key is provided via -k
- -k  Private key file, found in /config/ssl/ssl.key/; the key file can be located under the client SSL profile
- -n  Do not try to resolve PTR records for IP addresses
- -i  The capture VLAN name; this is the ingress VLAN for the TLS traffic

The traffic expression is nearly identical to the tcpdump expression syntax. In these examples we will be looking for HTTPS traffic between two hosts (the client and the LTM virtual server). In this case, the expression will be "host <client IP> and host <virtual server IP> and port 443". More information on expression syntax can be found in the ssldump and tcpdump manual pages (type 'man ssldump', or see <http://www.rtfm.com/ssldump/Ssldump.html>).

A healthy TLS session

When we look at a healthy TLS session, we can see what things should look like in an ideal situation. First, the client establishes a TCP connection to the virtual server. Next, the client initiates the handshake with a ClientHello. Within the ClientHello are a number of parameters: version, available cipher suites, a random number, and compression methods if available. The server then responds with a ServerHello, in which it selects the strongest cipher suite, the version, and possibly a compression method. After these parameters have been negotiated, the server will send its certificate, completing the ServerHello. Finally, the client will respond with a PreMasterSecret in the ClientKeyExchange, and each side will send a 1-byte ChangeCipherSpec agreeing on their symmetric key algorithm to finalize the handshake. The client and server can now exchange secure data via their TLS session until the connection is closed.
If all goes well, this is what a “clean” TLS session should look like:

    New TCP connection #1: 10.0.0.10(57677) <-> 10.0.0.20(443)
    1 1  0.0011 (0.0011)  C>S  Handshake ClientHello
           Version 3.1
           cipher suites
           TLS_DHE_RSA_WITH_AES_256_CBC_SHA
           [more cipher suites]
           TLS_RSA_EXPORT_WITH_RC4_40_MD5
           Unknown value 0xff
           compression methods
           unknown value
           NULL
    1 2  0.0012 (0.0001)  S>C  Handshake ServerHello
           Version 3.1
           session_id[0]=
           cipherSuite TLS_RSA_WITH_AES_256_CBC_SHA
           compressionMethod NULL
    1 3  0.0012 (0.0000)  S>C  Handshake Certificate
    1 4  0.0012 (0.0000)  S>C  Handshake ServerHelloDone
    1 5  0.0022 (0.0010)  C>S  Handshake ClientKeyExchange
    1 6  0.0022 (0.0000)  C>S  ChangeCipherSpec
    1 7  0.0022 (0.0000)  C>S  Handshake Finished
    1 8  0.0039 (0.0016)  S>C  ChangeCipherSpec
    1 9  0.0039 (0.0000)  S>C  Handshake Finished
    1 10 0.0050 (0.0010)  C>S  application_data
    1    0.0093 (0.0000)  S>C  TCP FIN
    1    0.0093 (0.0000)  C>S  TCP FIN

Scenario 1: Virtual server missing a client SSL profile

The client SSL profile defines what certificate and private key to use, a key passphrase if needed, allowed ciphers, and a number of other options related to TLS communications. Without a client SSL profile, a virtual server has no knowledge of any of the parameters necessary to create a TLS session. After you've configured a few hundred HTTPS virtuals this configuration step becomes automatic, but most of us mortals have missed this step at one point or another and left ourselves scratching our heads. We'll set up a test virtual that has all the necessary configuration options for an HTTPS virtual, except for the omission of the client SSL profile. The client will open a connection to the virtual on port 443, a TCP connection will be established, and the client will send a ClientHello. Normally the server would then respond with a ServerHello, but in this case there is no response, and after some period of time (5 minutes is the default timeout for the browser) the connection is closed.
This is what the ssldump would look like for a missing client SSL profile:

    New TCP connection #1: 10.0.0.10(46226) <-> 10.0.0.20(443)
    1 1  0.0011 (0.0011)  C>SV3.1(84)  Handshake ClientHello
           Version 3.1
           random[32]=
           4c b6 3b 84 24 d7 93 7f 4b 09 fa f1 40 4f 04 6e
           af f7 92 e1 3b a7 3a c2 70 1d 34 dc 9d e5 1b c8
           cipher suites
           TLS_DHE_RSA_WITH_AES_256_CBC_SHA
           [a number of other cipher suites]
           TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
           TLS_RSA_EXPORT_WITH_RC4_40_MD5
           Unknown value 0xff
           compression methods
           unknown value
           NULL
    1    299.9883 (299.9871)  C>S  TCP FIN
    1    299.9883 (0.0000)    S>C  TCP FIN

Scenario 2: Client and server do not share a common cipher suite

This is a common scenario when really old browsers try to connect to servers with modern cipher suites. We have purposely configured our SSL profile to only accept one cipher suite (TLS_RSA_WITH_AES_256_CBC_SHA in this case). When we try to connect to the virtual using a 128-bit key, the connection is immediately closed with no ServerHello from the virtual server. The differentiator here, while small, is the quick closure of the connection and the TCP FIN that arises from the server. This is unlike the behavior of the missing SSL profile, because the server initiates the connection teardown and there is no connection timeout. The differences, while subtle, hint at the details of the problem:

    New TCP connection #1: 10.0.0.10(49342) <-> 10.0.0.20(443)
    1 1  0.0010 (0.0010)  C>SV3.1(48)  Handshake ClientHello
           Version 3.1
           random[32]=
           4c b7 41 87 e3 74 88 ac 89 e7 39 2d 8c 27 0d c0
           6e 27 da ea 9f 57 7c ef 24 ed 21 df a6 26 20 83
           cipher suites
           TLS_RSA_WITH_AES_128_CBC_SHA
           Unknown value 0xff
           compression methods
           unknown value
           NULL
    1    0.0011 (0.0000)  S>C  TCP FIN
    1    0.0022 (0.0011)  C>S  TCP FIN

Conclusion

Troubleshooting TLS can be daunting at first, but an understanding of the TLS handshake can make troubleshooting much more approachable. We cannot exhibit every potential problem in this tech tip.
However, we hope that walking through some of the more common examples will give you the tools necessary to troubleshoot other issues as they arise. Happy troubleshooting!

Deploying BIG-IP VE in VMware vCloud Director
Beginning with BIG-IP version 11.2, you may have noticed a new package in the Virtual Edition downloads folder for vCloud Director 1.5. VMware’s vCloud Director is a software solution enabling enterprises to build multi-tenant private clouds. Each virtual datacenter has its own resource set of cpu, memory, and disk that the vDC owner can allocate as necessary. F5 DevCentral is now running in these virtual datacenter configurations (as announced June 13th, 2012), with full BIG-IP VE infrastructure in place. This article will describe the deployment process to get BIG-IP VE installed and running in the vCloud Director environment.

Uploading the vCloud Image

The upload process is fairly simple, but it does take a while. First, after logging in to the vCloud interface, click catalogs, then select your private catalog. Once in the private catalog, click the upload button highlighted below. This will launch a pop-up. Make sure the vCloud zip file has been extracted. When the .ovf is selected in this screen, it will grab that as well as the disk file after clicking upload. Now get a cup of coffee. Or a lot of them; this takes a while.

Deploying the BIG-IP VE OVF Template

Now that the image is in place, click on my cloud at the top navigation, select vApps, then select the plus sign, which will create a new vApp. (Or, the BIG-IP can be deployed into an existing vApp as well.) Select the BIG-IP VE template (bigip11_2 in the screenshot below) and click next. Give the vApp a name and click next. Accept the F5 EULA and click next. At this point, give the VM a full name and a computer name and click finish. I checked the network adapter box to show the network adapter type. It is not configurable at this point, and the flexible NIC is not the right one. After clicking finish, the system will create the vApp and build the VM, so maybe it’s time for another cup of coffee. Once the build is complete, click into the vapp_test vApp.
Right-click on the testbigip-11-2 VM and select properties. Do NOT power on the VM yet! CPU and memory should not be altered. More CPU won’t help TMM: there is no CMP yet in the virtual edition, and one extra CPU for system tasks is sufficient. TMM can’t schedule more than 4G of RAM either. Click “Show network adapter type” and again you’ll notice the NICs are not correct. Delete all the network interfaces, then re-add, one at a time, as many NICs (up to 10 in vCloud Director) as are necessary for your infrastructure.

To add a NIC, just click the add button, then select the network dropdown and select Add Network. At this point, you’ll need to already have a plan for your networking infrastructure. Organizational networks are usable in and between all vApps, whereas vApp networks are isolated to just that instance. I’ll show organizational network configuration in this article. Click Organization network and then click next. Select the appropriate network and click next. I’ve selected the Management network. For the management NIC I’ll leave the adapter type as E1000. The IP Mode is useful for systems where guest customization is enabled, but is still a required setting. I set it to Static-Manual and enter the self IP addresses assigned to those interfaces. This step is still required within the F5; it will not auto-configure the vlans and self IPs for you. For the remaining NICs that you add, make sure to set the adapter type to VMXNET 3. Then click OK to apply the new NIC configurations.

Note that adding more than 5 NICs in VE might cause the interfaces to re-order internally. If this happens, you’ll need to map the mac address in vCloud to the mac addresses reported in tmsh and adjust your vlans accordingly.

Powering Up!

After the configuration is updated, right-click on the testbigip-11-2 VM and select power on. After the VM powers on, BIG-IP VE will boot.
Login with the root/default credentials and type config at the prompt to set the management IP and netmask. Select No on auto-configuration. Set the IP address, then set the netmask. I selected No on the default route, but it might be necessary depending on the infrastructure you have in place. Finally, accept the settings. At this point, the system should be available on the management network. I have a linux box on that network as well so I can ssh into the BIG-IP VE to perform the licensing steps, as the vCloud Director console does not support copy/paste.

Problems Overcome During a Major LTM Software/Hardware Upgrade
I recently completed a successful major LTM hardware and software migration which accomplished two high-level goals:

- Software upgrade from v9.3.1HF8 to v10.1.0HF1
- Hardware platform migration from 6400 to 6900

I encountered several problems during the migration event that would have stopped me in my tracks had I not (in most cases) encountered them already during my testing. This is a list of those issues and what I did to address them. While I may not have all the documentation about these problems or even fully understand all the details, the bottom line is that the fixes worked. My hope is that someone else will benefit from it when it counts the most (and you know what I mean).

Problem #1 – Unable to Access the Configuration Utility (admin GUI)

The first issue I had to resolve was apparent immediately after the upgrade finished. When I tried to access the Configuration utility, I was denied:

    Access forbidden!
    You don't have permission to access the requested object.
    Error 403

I happened to find the resolution in SOL7448: Restricting access to the Configuration utility by source IP address. The SOL refers to bigpipe commands, which is what I used initially:

    bigpipe httpd allow all add
    bigpipe save

Since then, I’ve developed the corresponding TMSH commands, which is F5’s long-term direction toward managing the system:

    tmsh modify sys httpd allow replace-all-with {all}
    tmsh save / sys config

Problem #2 – Incompatible Profile

I encountered the second issue after the upgraded configuration was loaded for the first time:

    [root@bigip2:INOPERATIVE] config # BIGpipe unknown operation error:
    01070752:3: Virtual server vs_0_0_0_0_22 (forwarding type) has an incompatible profile.
By reviewing the /config/bigip.conf file, I found that my forwarding virtual servers had a TCP profile applied:
virtual vs_0_0_0_0_22 {
   destination any:22
   ip forward
   ip protocol tcp
   translate service disable
   profile custom_tcp
}
Apparently v9 did not care about this, but v10 would not load until I manually removed these TCP profile references from all of my forwarding virtual servers.
Problem #3 – BIGpipe parsing error
Then I encountered a second problem while attempting to load the configuration for the first time:
BIGpipe parsing error (/config/bigip.conf Line 6870): 012e0022:3: The requested value (x.x.x.x:3d-nfsd {) is invalid (show | <pool member list> | none) [add | delete]) for 'members' in 'pool'
While examining this error, I noticed that the port number was translated into a service name – "3d-nfsd". Fortunately, during my initial v10 research, I had come across SOL11293 - The default /etc/services file in BIG-IP version 10.1.0 contains service names that may cause a configuration load failure. While I had added a step in my upgrade process to prevent the LTM from performing service translation, it was not scheduled until after the configuration had been successfully loaded on the new hardware. Instead I had to move this step up in the overall process flow:
bigpipe cli service number
b save
The corresponding TMSH commands are:
tmsh modify cli global-settings service number
tmsh save /sys config
Problem #4 – Command is not valid in current event context
This was the final error we encountered when trying to load the upgraded configuration for the first time:
BIGpipe rule creation error: 01070151:3: Rule [www.mycompany.com] error: line 28: [command is not valid in current event context (HTTP_RESPONSE)] [HTTP::host]
While reviewing the iRule it was obvious that we had a statement which didn't make any sense, since there is no Host header in an HTTP response.
Apparently it didn't bother v9, but v10 didn't like it:
when HTTP_RESPONSE {
   switch -glob [string tolower [HTTP::host]] {
      <do some stuff>
   }
}
We simply removed that event from the iRule.
Problem #5: Failed Log Rotation
After I finished my first migration, I found myself in a situation where none of the logs in the /var/log directory were being rotated. The /var/log/secure log file held the best clue about the underlying issue:
warning crond[7634]: Deprecated pam_stack module called from service "crond"
I had to open a case with F5, who found that the PAM crond configuration file (/config/bigip/auth/pam.d/crond) had been pulled from the old unit:
#
# The PAM configuration file for the cron daemon
#
auth sufficient pam_rootok.so
auth required pam_stack.so service=system-auth
auth required pam_env.so
account required pam_stack.so service=system-auth
session required pam_limits.so
#session optional pam_krb5.so
I had to update the file from a clean unit (which I was fortunate enough to have at my disposal):
#
# The PAM configuration file for the cron daemon
#
auth sufficient pam_rootok.so
auth required pam_env.so
auth include system-auth
account required pam_access.so
account sufficient pam_permit.so
account include system-auth
session required pam_loginuid.so
session include system-auth
and restart crond:
bigstart restart crond
or in the v10 world:
tmsh restart sys service crond
Problem #6: LTM/GTM SSL Communication Failure
This particular issue is the sole reason that my most recent migration process took 10 hours instead of four. Even if you do have a GTM, you are not likely to encounter it, since it was a result of our own configuration. But I thought I'd include it, since it isn't something you'll see documented by F5. One of the steps in my migration plan was to validate successful LTM/GTM communication with iqdump.
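For reference, the iqdump utility is run from the shell and pointed at the peer's self IP; a hedged sketch (the address below is a hypothetical example, not from this environment):

```shell
# From the GTM, dump the iQuery session to the LTM's self IP;
# a healthy SSL channel streams XML status data continuously.
# (10.10.2.5 is a hypothetical peer address)
iqdump 10.10.2.5
```

When certificate verification fails, the session errors out instead of streaming data, which was exactly the symptom in this case.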
When I got to this point in the migration process, I found that iqdump was failing in both directions because of SSL certificate verification, despite having installed the new Trusted Server Certificate on the GTM and Trusted Device Certificates on both the LTM and GTM. After several hours of troubleshooting, I decided to perform a tcpdump to see if I could gain any insight based on what was happening on the wire. I didn't notice it at first, but when I looked at the trace again later I noticed the hostname on the certificate that the LTM was presenting was not correct. It was a very small detail that could have easily been missed, but it was the key to identifying the root cause. Having dealt with Device Certificates in the past, I knew that the Device Certificate file was /config/httpd/conf/ssl.crt/server.crt. When I looked in that directory on the filesystem, I found a number of certificates (and subsequently, private keys in /config/httpd/conf/ssl.key) that should not have been there. I also found that these certificates and keys had been pulled from the configuration on the old hardware. So I removed the extraneous certificates and keys from these directories and restarted the httpd service ("bigstart restart httpd", or in the v10 world, "tmsh restart sys service httpd"). After I did that, the LTM presented the correct Device Certificate and LTM/GTM communication was restored. I'm still not sure to this day how those certificates got there in the first place...

Quick Start: Application Delivery Fundamentals
On DevCentral we often focus on the out-of-the-box solutions. iRules, iControl, iApps and more are fantastic and exciting technologies, but there's a lot that goes into making an F5 device work before you even get to play with some of those more advanced features. Things like configuring a pool or a virtual server oftentimes get taken for granted on DevCentral, and that is something we'd like to change. The reality is there are many new users coming to the world of F5 all the time, and not everyone is an expert. Not only is the user base expanding, but so is the feature base with every outgoing release. As we add more features and more users who are still cutting their teeth, it becomes significantly more important to continue educating not only the advanced users, but also the newly indoctrinated. As a means of digging into the more basic, entry-level concepts and knowledge required to successfully navigate the waters of a freshly licensed F5 device, let's take a look at a plausible, and frankly quite common, scenario: Sam is a network admin. Sam works with many products, from many vendors, and is constantly expanding his skill set. As such he never quite knows what he will be doing from day to day. As it turns out, today his manager has tasked him with migrating from a technology that is no longer supported, and from which their company is therefore moving away, to a newer, currently supported platform. That platform happens to be F5, with which Sam's experience can be tidily summed up as "Open box. Plug in.". This means Sam has some learning to do, and he needs to know where to start. First things first: he would need to gather the appropriate information for the application. Things such as site IP, site name, server IP space, and the relevant VLANs required are necessary before rolling up his sleeves and getting started.
This information would look like: Once he has this information, it's time to log into the F5 device, with his freshly changed admin password, and start the configuration process. Ideally he has followed the configuration wizard and has the management configuration completed, which allows him to move on to configuring the production components. Upon logging into the device he knows his first task needs to be to get things talking, which means, in this day and age of VLAN-separated links, he'll be starting with the VLANs. An IP address can live nearly anywhere, but a server can't communicate with anyone without the path laid out, and that means the VLANs are the most logical place to start. To create a VLAN he would navigate in the GUI to Network -> VLANs, and here he'll be able to create whatever he needs. All he needs to configure is the name, a description of what this VLAN is used for, a VLAN tag (which is technically optional, but in a VLAN-separated network will be required) and the F5 interface on which the traffic is coming in (e.g., where's the cable?). In Sam's case, as you can probably tell from the information above, he will need three separate VLANs to support the different networks required for his deployment. Once he has this simple task completed, he'll be able to move on from VLANs to the next logical step in the configuration, which is to define a self IP address on the F5 device. The Self IP is the IP address of the F5 device where it lives on each network. These addresses allow it to communicate with each of the defined networks once they are configured and applied to the appropriate VLAN. Since we have three networks to communicate with, and three VLANs to represent those three networks, we'll use three Self IP addresses, one for each VLAN that the F5 device is able to route to. To create a Self IP Sam would just navigate to Network -> Self IPs and again select "create".
In this screen he'll assign a Name, an IP address and a Netmask, then select the appropriate VLAN for each IP address. Once this is finished his box is functionally routable across all three networks. To double-check that all is working as intended he can easily ssh into the box and send a few pings out across the different networks. Assuming all is well, it's time to start configuring his application objects. At this point Sam wants to begin defining his servers. Within the F5 there are a couple of ways to do this. He could begin creating nodes and directly defining server addresses to be used later in the deployment. However, in this case, it is more efficient for him to begin at the pool level, because by going through the pool creation process he will be defining the nodes when he creates his pool members. Nodes are directly configured server objects; they are merely an IP address that defines a server. A pool member, however, is an IP:port combination that defines a destination and is also tied to a particular pool. The pool is a collection of pool members that essentially serve the application. The pool level is where things such as the load balancing method, pool monitors and more are configured. This is the object that will effectively be the internal destination of the inbound traffic for this application. To create a pool, and subsequent pool members, Sam would navigate to Local Traffic -> Pools and again select create. At this screen he will assign a pool name, which is required, and will have the option of further configuring the pool with such items as a description, health monitors, load balancing method and more. In Sam's case he defines his first pool as pool1. Inside of pool1, under resources, he begins to add his server objects and service port. This is the port on which the servers will be listening.
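For those who prefer the command line, the same pool can be sketched in tmsh; the member addresses below are hypothetical examples, not taken from Sam's worksheet:

```shell
# Create pool1 with two hypothetical members listening on port 80;
# Round Robin is the default load balancing method
tmsh create ltm pool pool1 members add { 10.10.2.45:80 10.10.2.46:80 }
tmsh save /sys config
```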
Once he's added the appropriate new members it's time to make a decision on which load balancing method he'll be using for this pool. Load balancing is simply the concept of distributing server traffic amongst multiple pool members (servers) via a pre-defined algorithm. When it comes to load balancing on an F5 device, there are several methods available, ranging from the simple and classic, such as Round Robin, to the far more advanced, like priority group activation and weighted least connections; there is no shortage of ways to slice and dice traffic. In Sam's case he's just interested in basic traffic distribution, which leads him to choose Round Robin, which will evenly balance traffic amongst pool members. The only concern in this scenario is the possibility of one of said pool members being unavailable while being sent traffic. To guard against this it's time for Sam to configure a health monitor. A health monitor is a scheduled check on resource availability. This could be a simple ICMP request, or a more advanced monitor that makes a specific request and evaluates resource health based on the response. Basic HTTP monitors are already defined on the F5 device. To attach one of these monitors you would simply go back to the pool that was just created and, in the "health monitors" section, move the desired monitor, in this case HTTP, to active. Sam has a different requirement. He wants to be able to perform a query against the web-based application in his pool and ensure that appropriate response data is received. This is easily accomplished on the F5 device, as you can see below:
1. In the Local Traffic -> Monitors section hit create
2. Name it, and select HTTP from the type drop-down
3. Create a Send String. This is the query the monitor is going to send to the server. In this case, Sam sends a GET /server-status.html
4. Create a Receive String. This is what the monitor needs to find in the response for it to consider the resource available.
Sam used "Server ok" here.
5. Hit create, and the new monitor is now able to be attached to the pool.
So at this point the device has a defined traffic destination on the server side, but clients are still unable to connect. This is because while the pool defines the server-side destination for traffic, there still needs to be a client-side destination so the F5 device knows to listen for client connections on the desired IP:port. To accomplish this Sam needs to create a Virtual Server. A Virtual Server (or VIP) is a client-side IP:port combination that allows a client to connect to one of the resources behind the device. Without a Virtual Server, no client connection can be established. To configure a Virtual Server Sam would navigate to Local Traffic -> Virtual Servers and once again select create. On the Virtual Server creation screen there are many options to customize your application deployment to fit your particular needs. The core information required to configure a Virtual Server is a name, a destination (IP address), and a service port (the port on which the client will connect). This would get you a basic TCP Virtual Server with no bells and whistles. In Sam's case, however, he is dealing with HTTP traffic, which means he'll want to go into the Virtual Server's configuration and select the profile named "http" under the "HTTP Profile" section. This will enable HTTP parsing and optimization at the Virtual Server layer. A profile within the F5 device is a way to create an abstraction layer between configuration objects and configuration options. This allows a user to create a customized set of options, for instance an HTTP profile that handles traffic in the specific way they desire, and to re-use those options easily across multiple objects, i.e. Virtual Servers. After assigning this profile Sam is ready to begin testing traffic being passed to his application. Unfortunately, there's a problem.
While the connections are being established correctly and data appears to be passing through the F5 device to the servers, the responses from the servers, which a simple tcpdump will show are leaving the servers, never seem to arrive back at the client. Some further analysis shows that the requests received by the server still have the client's IP as the source. This is problematic because the server then attempts to respond using its own IP as the source. This means even if the response gets routed directly to the client, the client will reject it, as it is expecting the response to originate from the address it used as the destination of the request. This issue can be seen below:
Client: 10.10.2.10
F5 VS: 10.10.2.30 (no SNAT)
Server: 10.10.2.46:80
The image shows 3 concurrent captures, taken on the client, the F5, and the server. You can see under the yellow circles where the connection goes wrong. The connection comes into the F5 from the client. The F5 completes the handshake and begins the handshake with the server. The server, seeing the source IP as 10.10.2.10, sends its SYN/ACK directly back to the client. The client gets it and RSTs because, from its point of reference, it has already completed the handshake it wanted to; the new SYN/ACK doesn't match anything in its current network stack. To solve this issue all Sam needs to do is ensure that all traffic returning to the client originates from the same IP address that the client used as the destination of the request. The easiest way to do this is to enable the SNAT Automap feature within the Virtual Server. This will re-write the source address of both the request from the client to the server, and the response from the server to the client. This is important to ensure that the traffic for both the request and response traverses the F5 device.
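Enabling SNAT Automap on an existing virtual is also a one-liner in tmsh; a hedged sketch (the virtual server name vs_app1 is a hypothetical example):

```shell
# Enable SNAT Automap so server responses return through the BIG-IP
# (virtual server name is a hypothetical example)
tmsh modify ltm virtual vs_app1 snat automap
tmsh save /sys config
```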
In the case of Automap SNAT it will automatically use the self IP on the appropriate VLAN for traffic bound to the pool members, and the Virtual Server's external IP for traffic bound to the client. For clarification see the image below: On the top portion of the drawing we can see the asymmetric network path. The client sends a request through the F5 virtual, but without SNAT, the server attempts to respond straight back to the client. The client at that point is not listening for a response from the server's source IP, therefore it just drops it. The bottom drawing shows what Automap SNAT does. The source IPs are adjusted to ensure that all traffic going to and from the server through the F5 traverses the F5. Bam, problem solved. After this small configuration change and some further testing, Sam will find that traffic is now flowing as expected and the application is serving content. With this, Sam's task of transitioning an application deployment to the F5 device is completed. The above scenario is common, but obviously not overly complex. There are many further options, toys and tricks available to you when configuring your device. This, however, should get you from a new box to passing traffic without much hassle. For further options and more advanced scenarios dig into the Advanced Design & Config section on DevCentral, as well as the rest of what the community has to offer. This article was a collaboration with DevCentral's Josh Michaels and Colin Walker. ENJOY!

Rewriting Redirects
While best practices for virtualized web applications may indicate that relative self-referencing links and redirects (those which don't include the protocol or the hostname) are preferable to absolute ones (those which do), many applications load balanced by our gear still send absolute self-references. This drives a fairly common requirement when proxying or virtualizing HTTP applications: to manipulate any redirects the servers may set such that they fully support the intended proxy or virtualization scheme. In some cases the requirement is as simple as changing "http://" to "https://" in every redirect the server sends because it is unaware of SSL offloading. Other applications or environments may require modifications to the host, URI, or other headers. LTM provides a couple of different ways to manage server-set redirects appropriately.
HTTP profile option: "Rewrite Redirects"
The LTM http profile contains the "Rewrite Redirects" option, which supports rewriting server-set redirects to the https protocol with a hostname matching the one requested by the client. The possible settings for the option are "All", "Matching", "Node", and "None".
Rewrite Redirects settings for the http profile:
Setting: All
Effect: Rewrites all HTTP 301, 302, 303, 305, or 307 redirects
Resulting redirect: https://<requested_hostname>/<requested_uri>
Use case: Use "All" if all redirects are self-referencing and the application is intended to be secure throughout. You should also use "All" if your application is intended to be secure throughout, even if redirected to another hostname.
Setting: Matching
Effect: Rewrites redirects when the request and the redirect are identical except for a trailing slash. See K14775.
Resulting redirect: https://<requested_hostname>/<requested_uri>/
Use case: Use "Matching" to rewrite only courtesy redirects intended to append a missing trailing slash to a directory request.
Setting: Node
Effect: Rewrites all redirects containing pool member IP addresses instead of an FQDN
Resulting redirect: https://<vs_address>/<requested_uri>
Use case: Use "Node" if your servers send redirects that include the server's own IP address instead of a hostname.
Setting: None
Effect: No redirects are rewritten
Resulting redirect: N/A
Use case: Default setting.
Note that all options will rewrite the specified redirects to HTTPS, so there must be an HTTPS virtual enabled on the same address as the HTTP virtual server.
iRule Options
While these options cover a broad range of applications, they may not be granular enough to meet your needs. For example, you might only want to re-write the hostname, not the protocol, to support HTTP-only proxying scenarios. You might need it to temporarily work around product issues such as those noted in SOL8535/CR89873. In these cases, you can use an iRule that uses the HTTP::is_redirect command to identify server-set redirects and selectively rewrite any part of the Location header using the HTTP::header command with the "replace" option. Here's an iRule that rewrites just one specific hostname to another, preserving the protocol scheme and URI as set by the server:
when HTTP_RESPONSE {
   if { [HTTP::is_redirect] } {
      HTTP::header replace Location [string map {"A.internal.com" "X.external.com"} [HTTP::header Location]]
   }
}
Here's one that rewrites both relative and absolute redirects to absolute HTTPS redirects, inserting the requested hostname when re-writing the relative redirect to absolute:
when HTTP_REQUEST {
   # save hostname for use in response
   set fqdn_name [HTTP::host]
}
when HTTP_RESPONSE {
   if { [HTTP::is_redirect] } {
      if { [HTTP::header Location] starts_with "/" } {
         HTTP::header replace Location "https://$fqdn_name[HTTP::header Location]"
      } else {
         HTTP::header replace Location "[string map {"http://" "https://"} [HTTP::header Location]]"
      }
   }
}
The string map example could quite easily be adjusted or extended to meet just about any redirect rewriting need you might encounter.
(The string map command will accept multiple replacement pairs, which can come in handy if multiple hostnames or directory strings need to be re-written -- in many cases you can perform the intended replacements with a single string map command.)
Taking it a step further
As I mentioned earlier, redirects are only one place server self-references may be found. If absolute self-referencing links are embedded in the HTTP payload, you may need to build and apply a stream profile to perform the appropriate replacements. An iRule could also be used for more complex payload replacements if necessary. For the ultimate in redirect rewriting and all other things HTTP proxy, I direct your attention to the legendary ProxyPass iRule contributed to the DevCentral codeshare by Kirk Bauer (thanks, Kirk, for a very comprehensive & instructive example!)
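If you do end up needing the stream profile approach mentioned above, a tmsh sketch might look like this (the profile name, virtual server name, and hostnames are all hypothetical examples):

```shell
# Rewrite absolute self-references embedded in response payloads
# (profile name, virtual name, and hostnames are hypothetical)
tmsh create ltm profile stream rewrite_links source "http://A.internal.com" target "https://X.external.com"
tmsh modify ltm virtual vs_app1 profiles add { rewrite_links }
tmsh save /sys config
```

Keep in mind that a stream profile performs its search and replace on every matching response payload, and it can only match uncompressed content, so you may also need to manage compression settings on the virtual for it to take effect.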