Rewriting Redirects
While best practices for virtualized web applications may indicate that relative self-referencing links and redirects (those which don't include the protocol or the hostname) are preferable to absolute ones (those which do), many applications load balanced by our gear still send absolute self-references. This drives a fairly common requirement when proxying or virtualizing HTTP applications: manipulating any redirects the servers may set so that they fully support the intended proxy or virtualization scheme. In some cases the requirement is as simple as changing "http://" to "https://" in every redirect the server sends because it is unaware of SSL offloading. Other applications or environments may require modifications to the host, URI, or other headers. LTM provides a couple of different ways to manage server-set redirects appropriately.

HTTP profile option: "Rewrite Redirects"

The LTM http profile contains the "Rewrite Redirects" option, which supports rewriting server-set redirects to the https protocol with a hostname matching the one requested by the client. The possible settings for the option are "All", "Matching", "Node", and "None".

Rewrite Redirects settings for the http profile:

Setting: All
Effect: Rewrites all HTTP 301, 302, 303, 305, or 307 redirects
Resulting redirect: https://<requested_hostname>/<requested_uri>
Use case: Use "All" if all redirects are self-referencing and the application is intended to be secure throughout. You should also use "All" if your application is intended to be secure throughout, even if redirected to another hostname.

Setting: Matching
Effect: Rewrites redirects when the request and the redirect are identical except for a trailing slash. See K14775.
Resulting redirect: https://<requested_hostname>/<requested_uri>/
Use case: Use "Matching" to rewrite only courtesy redirects intended to append a missing trailing slash to a directory request.

Setting: Node
Effect: Rewrites all redirects containing pool member IP addresses instead of FQDN
Resulting redirect: https://<vs_address>/<requested_uri>
Use case: Use "Node" if your servers send redirects that include the server's own IP address instead of a hostname.

Setting: None
Effect: No redirects are rewritten (this is the default setting)
Resulting redirect: N/A

Note that all options rewrite the specified redirects to HTTPS, so there must be an HTTPS virtual server enabled on the same address as the HTTP virtual server.

iRule Options

While these options cover a broad range of applications, they may not be granular enough to meet your needs. For example, you might only want to rewrite the hostname, not the protocol, to support HTTP-only proxying scenarios. You might need to temporarily work around product issues such as those noted in SOL8535/CR89873. In these cases, you can use an iRule that uses the HTTP::is_redirect command to identify server-set redirects and selectively rewrite any part of the Location header using the HTTP::header command with the "replace" option.
Here's an iRule that rewrites just one specific hostname to another, preserving the protocol scheme and URI as set by the server:

when HTTP_RESPONSE {
    if { [HTTP::is_redirect] } {
        HTTP::header replace Location [string map {"A.internal.com" "X.external.com"} [HTTP::header Location]]
    }
}

Here's one that rewrites both relative and absolute redirects to absolute HTTPS redirects, inserting the requested hostname when rewriting the relative redirect to absolute:

when HTTP_REQUEST {
    # save hostname for use in response
    set fqdn_name [HTTP::host]
}
when HTTP_RESPONSE {
    if { [HTTP::is_redirect] } {
        if { [HTTP::header Location] starts_with "/" } {
            HTTP::header replace Location "https://$fqdn_name[HTTP::header Location]"
        } else {
            HTTP::header replace Location "[string map {"http://" "https://"} [HTTP::header Location]]"
        }
    }
}

The string map example could quite easily be adjusted or extended to meet just about any redirect rewriting need you might encounter. (The string map command will accept multiple replacement pairs, which can come in handy if multiple hostnames or directory strings need to be rewritten; in many cases you can perform the intended replacements with a single string map command.)

Taking it a step further

As I mentioned earlier, redirects are only one place server self-references may be found. If absolute self-referencing links are embedded in the HTTP payload, you may need to build and apply a stream profile to perform the appropriate replacements. An iRule could also be used for more complex payload replacements if necessary. For the ultimate in redirect rewriting and all other things HTTP proxy, I direct your attention to the legendary ProxyPass iRule contributed to the DevCentral codeshare by Kirk Bauer (thanks, Kirk, for a very comprehensive and instructive example!)
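To make the stream profile idea a bit more concrete, here is a rough sketch of that approach; the hostnames are placeholders, and it assumes a stream profile has already been created and applied to the virtual server so the STREAM commands have a filter to control:

when HTTP_REQUEST {
    # Ask for an uncompressed response so the stream filter can see the payload
    HTTP::header remove Accept-Encoding
    # Only rewrite server responses, not client request payloads
    STREAM::disable
}
when HTTP_RESPONSE {
    # Limit the rewrite to text content types
    if { [HTTP::header Content-Type] contains "text" } {
        # Placeholder hostnames; adjust the expression to your environment
        STREAM::expression {@http://a.internal.com@https://x.external.com@}
        STREAM::enable
    }
}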
Troubleshooting TLS Problems With ssldump

Introduction

Transport Layer Security (TLS) is used to secure network communications between two hosts. TLS largely replaced SSL (Secure Sockets Layer) starting in 1999, but many browsers still provide backwards compatibility for SSL version 3. TLS is the basis for securing all HTTPS communications on the Internet. BIG-IP provides the benefit of being able to offload the encryption and decryption of TLS traffic onto a purpose-specific ASIC. This provides performance benefits for the application servers, but also provides an extra layer for troubleshooting when problems arise. It can be a daunting task to tackle a TLS issue with tcpdump alone. Luckily, there is a utility called ssldump. Ssldump looks for TLS packets and decodes the transactions, then outputs them to the console or to a file. It will display all the components of the handshake, and if a private key is provided it will also display the encrypted application data. The ability to fully examine communications from the application layer down to the network layer in one place makes troubleshooting much easier.

Note: The user interface of the BIG-IP refers to everything as SSL with little mention of TLS. The actual protocol being negotiated in these examples is TLS version 1.0, which appears as "Version 3.1" in the handshakes. For more information on the major and minor versions of TLS, see the TLS record protocol section of the Wikipedia article.

Overview of ssldump

I will spare you the man page, but here are a few of the options we will be using to examine traffic in our examples:

ssldump -A -d -k <key file> -n -i <capture VLAN> <traffic expression>

-A  Print all fields
-d  Show application data when a private key is provided via -k
-k  Private key file, found in /config/ssl/ssl.key/; the key file can be located under the client SSL profile
-n  Do not try to resolve PTR records for IP addresses
-i  The capture VLAN name is the ingress VLAN for the TLS traffic

The traffic expression is nearly identical to the tcpdump expression syntax. In these examples we will be looking for HTTPS traffic between two hosts (the client and the LTM virtual server). In this case, the expression will be "host <client IP> and host <virtual server IP> and port 443". More information on expression syntax can be found in the ssldump and tcpdump manual pages (type 'man ssldump', or see <http://www.rtfm.com/ssldump/Ssldump.html>).

A healthy TLS session

When we look at a healthy TLS session we can see what things should look like in an ideal situation. First the client establishes a TCP connection to the virtual server. Next, the client initiates the handshake with a ClientHello. Within the ClientHello are a number of parameters: version, available cipher suites, a random number, and compression methods if available. The server then responds with a ServerHello in which it selects the strongest cipher suite, the version, and possibly a compression method. After these parameters have been negotiated, the server will send its certificate, completing the ServerHello. Finally, the client will respond with a PreMasterSecret in the ClientKeyExchange and each will send a 1-byte ChangeCipherSpec agreeing on their symmetric key algorithm to finalize the handshake. The client and server can now exchange secure data via their TLS session until the connection is closed.
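The traces in the examples below were captured with an invocation along these lines; the client and virtual server addresses match the lab used here, while the VLAN name and key file path are placeholders for your own environment:

# VLAN name and key file are placeholders; substitute your own
ssldump -A -d -k /config/ssl/ssl.key/default.key -n -i internal "host 10.0.0.10 and host 10.0.0.20 and port 443"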
If all goes well, this is what a "clean" TLS session should look like:

New TCP connection #1: 10.0.0.10(57677) <-> 10.0.0.20(443)
1 1  0.0011 (0.0011)  C>S  Handshake ClientHello
        Version 3.1
        cipher suites
          TLS_DHE_RSA_WITH_AES_256_CBC_SHA
          [more cipher suites]
          TLS_RSA_EXPORT_WITH_RC4_40_MD5
          Unknown value 0xff
        compression methods
          unknown value
          NULL
1 2  0.0012 (0.0001)  S>C  Handshake ServerHello
        Version 3.1
        session_id[0]=
        cipherSuite TLS_RSA_WITH_AES_256_CBC_SHA
        compressionMethod NULL
1 3  0.0012 (0.0000)  S>C  Handshake Certificate
1 4  0.0012 (0.0000)  S>C  Handshake ServerHelloDone
1 5  0.0022 (0.0010)  C>S  Handshake ClientKeyExchange
1 6  0.0022 (0.0000)  C>S  ChangeCipherSpec
1 7  0.0022 (0.0000)  C>S  Handshake Finished
1 8  0.0039 (0.0016)  S>C  ChangeCipherSpec
1 9  0.0039 (0.0000)  S>C  Handshake Finished
1 10 0.0050 (0.0010)  C>S  application_data
1    0.0093 (0.0000)  S>C  TCP FIN
1    0.0093 (0.0000)  C>S  TCP FIN

Scenario 1: Virtual server missing a client SSL profile

The client SSL profile defines what certificate and private key to use, a key passphrase if needed, allowed ciphers, and a number of other options related to TLS communications. Without a client SSL profile, a virtual server has no knowledge of any of the parameters necessary to create a TLS session. After you've configured a few hundred HTTPS virtuals this configuration step becomes automatic, but most of us mortals have missed the step at one point or another and left ourselves scratching our heads.

We'll set up a test virtual that has all the necessary configuration for an HTTPS virtual server, except for the omission of the client SSL profile. The client will open a connection to the virtual on port 443, a TCP connection will be established, and the client will send a ClientHello. Normally the server would then respond with a ServerHello, but in this case there is no response, and after some period of time (5 minutes is the default timeout for the browser) the connection is closed. This is what the ssldump would look like for a missing client SSL profile:

New TCP connection #1: 10.0.0.10(46226) <-> 10.0.0.20(443)
1 1  0.0011 (0.0011)  C>SV3.1(84)  Handshake ClientHello
        Version 3.1
        random[32]=
          4c b6 3b 84 24 d7 93 7f 4b 09 fa f1 40 4f 04 6e
          af f7 92 e1 3b a7 3a c2 70 1d 34 dc 9d e5 1b c8
        cipher suites
          TLS_DHE_RSA_WITH_AES_256_CBC_SHA
          [a number of other cipher suites]
          TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
          TLS_RSA_EXPORT_WITH_RC4_40_MD5
          Unknown value 0xff
        compression methods
          unknown value
          NULL
1    299.9883 (299.9871)  C>S  TCP FIN
1    299.9883 (0.0000)    S>C  TCP FIN

Scenario 2: Client and server do not share a common cipher suite

This is a common scenario when really old browsers try to connect to servers with modern cipher suites. We have purposely configured our SSL profile to only accept one cipher suite (TLS_RSA_WITH_AES_256_CBC_SHA in this case). When we try to connect to the virtual using a 128-bit key, the connection is immediately closed with no ServerHello from the virtual server. The differentiator here, while small, is the quick closure of the connection and the TCP FIN that arises from the server. This is unlike the behavior of the missing SSL profile, because the server initiates the connection teardown and there is no connection timeout.
The differences, while subtle, hint at the details of the problem:

New TCP connection #1: 10.0.0.10(49342) <-> 10.0.0.20(443)
1 1  0.0010 (0.0010)  C>SV3.1(48)  Handshake ClientHello
        Version 3.1
        random[32]=
          4c b7 41 87 e3 74 88 ac 89 e7 39 2d 8c 27 0d c0
          6e 27 da ea 9f 57 7c ef 24 ed 21 df a6 26 20 83
        cipher suites
          TLS_RSA_WITH_AES_128_CBC_SHA
          Unknown value 0xff
        compression methods
          unknown value
          NULL
1    0.0011 (0.0000)  S>C  TCP FIN
1    0.0022 (0.0011)  C>S  TCP FIN

Conclusion

Troubleshooting TLS can be daunting at first, but an understanding of the TLS handshake can make troubleshooting much more approachable. We cannot exhibit every potential problem in this tech tip. However, we hope that walking through some of the more common examples will give you the tools necessary to troubleshoot other issues as they arise. Happy troubleshooting!
Converting a Cisco ACE configuration file to F5 BIG-IP Format

In September, Cisco announced that it was ceasing development and pulling back on sales of its Application Control Engine (ACE) load balancing modules. Customers of Cisco's ACE product line will now have to look for a replacement product to solve their load balancing and application delivery needs.

One of the first questions that will come up when a customer starts looking into replacement products surrounds the issue of upgradability. Will the customer be able to import their current configuration into the new technology, or will they have to start with the new product from scratch? For smaller businesses, starting over can be a refreshing way to clean up some of the things you've been meaning to but weren't able to for one reason or another. But for a large majority of the users out there, starting over from nothing with a new product is a daunting task. To help those users considering a move to the F5 universe, DevCentral has included several scripts to assist with the configuration migration process. In our Codeshare section we created some scripts useful in converting ACE configurations into their respective F5 counterparts:

https://devcentral.f5.com/s/articles/cisco-ace-to-f5-big-ip
https://devcentral.f5.com/s/articles/Cisco-ACE-to-F5-Conversion-Python-3
https://devcentral.f5.com/s/articles/cisco-ace-to-f5-big-ip-via-tmsh

We also have scripts covering Cisco's CSS (https://devcentral.f5.com/s/articles/cisco-css-to-f5-big-ip) and CSM (https://devcentral.f5.com/s/articles/cisco-csm-to-f5-big-ip) products as well.

In this article, I'm going to focus on the "ace2f5-tmsh" script in the ace2f5.zip script library. The script takes an ACE configuration as input and creates a TMSH script to create the corresponding F5 BIG-IP objects.

ace2f5-tmsh.pl

$ perl ace2f5-tmsh.pl ace_config > tmsh_script

We could leave it at that, but I'll use this article to discuss the components of the ACE configuration and how they map to F5 objects.

ip

The ip object in the ACE configuration is defined like this:

ip route 0.0.0.0 0.0.0.0 10.211.143.1

It equates to a tmsh "net route" command:

net route 0.0.0.0-0 {
    network 0.0.0.0/0
    gw 10.211.143.1
}

rserver

An "rserver" is basically a node containing a server address, including an optional "inservice" attribute indicating whether it's active or not.

ACE Configuration

rserver host R190-JOEINC0060
  ip address 10.213.240.85
rserver host R191-JOEINC0061
  ip address 10.213.240.86
  inservice
rserver host R192-JOEINC0062
  ip address 10.213.240.88
  inservice
rserver host R193-JOEINC0063
  ip address 10.213.240.89
  inservice

It will be used to find the IP address for a given rserver hostname.

serverfarm

A serverfarm is an LTM pool, except that it doesn't have a port assigned to it yet.

ACE Configuration

serverfarm host MySite-JoeInc
  predictor hash url
  rserver R190-JOEINC0060
    inservice
  rserver R191-JOEINC0061
    inservice
  rserver R192-JOEINC0062
    inservice
  rserver R193-JOEINC0063
    inservice

F5 Configuration

ltm pool Insiteqa-JoeInc {
    load-balancing-mode predictive-node
    members { 10.213.240.86:any { address 10.213.240.86 }}
    members { 10.213.240.88:any { address 10.213.240.88 }}
    members { 10.213.240.89:any { address 10.213.240.89 }}
}

probe

A "probe" is an LTM monitor, except that it does not have a port.

ACE Configuration

probe tcp MySite-JoeInc
  interval 5
  faildetect 2
  passdetect interval 10
  passdetect count 2

This maps to the TMSH "ltm monitor" command.
F5 Configuration

ltm monitor Insiteqa-JoeInc {
    defaults from tcp
    interval 5
    timeout 10
    retry 2
}

sticky

The "sticky" object is a way to create a persistence profile. First you tie the serverfarm to the persist profile, then you tie the profile to the virtual server.

ACE Configuration

sticky ip-netmask 255.255.255.255 address source MySite-JoeInc-sticky
  timeout 60
  replicate sticky
  serverfarm MySite-JoeInc

class-map

A "class-map" assigns a listener, or virtual IP address and port number, which is used for the clientside and serverside of the connection.

ACE Configuration

class-map match-any vip-MySite-JoeInc-12345
  2 match virtual-address 10.213.238.140 tcp eq 12345
class-map match-any vip-MySite-JoeInc-1433
  2 match virtual-address 10.213.238.140 tcp eq 1433
class-map match-any vip-MySite-JoeInc-31314
  2 match virtual-address 10.213.238.140 tcp eq 31314
class-map match-any vip-MySite-JoeInc-8080
  2 match virtual-address 10.213.238.140 tcp eq 8080
class-map match-any vip-MySite-JoeInc-http
  2 match virtual-address 10.213.238.140 tcp eq www
class-map match-any vip-MySite-JoeInc-https
  2 match virtual-address 10.213.238.140 tcp eq https

policy-map

A policy-map of type loadbalance simply ties the persistence profile to the virtual. The "multi-match" attribute constructs the virtual server by tying a bunch of objects together.

ACE Configuration

policy-map type loadbalance first-match vip-pol-MySite-JoeInc
  class class-default
    sticky-serverfarm MySite-JoeInc-sticky
policy-map multi-match lb-MySite-JoeInc
  class vip-MySite-JoeInc-http
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-https
    loadbalance vip inservice
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-12345
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-31314
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-1433
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class reals
    nat dynamic 1 vlan 240
  class vip-MySite-JoeInc-8080
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply

F5 Configuration

ltm virtual vip-Insiteqa-JoeInc-12345 {
    destination 10.213.238.140:12345
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles {
        tcp {}
    }
}
ltm virtual vip-Insiteqa-JoeInc-1433 {
    destination 10.213.238.140:1433
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles {
        tcp {}
    }
}
ltm virtual vip-Insiteqa-JoeInc-31314 {
    destination 10.213.238.140:31314
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles {
        tcp {}
    }
}
ltm virtual vip-Insiteqa-JoeInc-8080 {
    destination 10.213.238.140:8080
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles {
        tcp {}
    }
}
ltm virtual vip-Insiteqa-JoeInc-http {
    destination 10.213.238.140:http
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles {
        tcp {}
        http {}
    }
}
ltm virtual vip-Insiteqa-JoeInc-https {
    destination 10.213.238.140:https
    profiles {
        tcp {}
    }
}

Conclusion

If you are considering migrating from Cisco's ACE to F5, I recommend you take a look at the Cisco conversion scripts to assist with the conversion.
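If the generated output is in configuration-stanza form like the examples above, one way to try it against a test BIG-IP, assuming the output is saved to /var/tmp/ace_converted.conf (a made-up path) and reviewed first, is to merge it into the running configuration:

# Path is illustrative; review the file before merging
tmsh load sys config merge file /var/tmp/ace_converted.conf
tmsh save sys config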
Writing to and rotating custom log files

Sometimes I need to log information from iRules to debug something. So I add a simple log statement, like this:

when HTTP_REQUEST {
    if { [HTTP::uri] equals "/secure" } {
        log local0. "[IP::remote_addr] attempted to access /secure"
    }
}

This is fine, but it clutters up the /var/log/ltm log file. Ideally I want to log this information into a separate log file. To accomplish this, I first change the log statement to incorporate a custom string; I chose the string "##":

when HTTP_REQUEST {
    if { [HTTP::uri] equals "/secure" } {
        log local0. "##[IP::remote_addr] attempted to access /secure"
    }
}

Now I have to customize syslog to catch this string and send it somewhere other than /var/log/ltm. I do this by customizing syslog with an include statement:

tmsh modify sys syslog include '" filter f_local0 { facility(local0) and not match(\": ##\"); }; filter f_local0_customlog { facility(local0) and match(\": ##\"); }; destination d_customlog { file(\"/var/log/customlog\" create_dirs(yes)); }; log { source(local); filter(f_local0_customlog); destination(d_customlog); }; "'

then saving the configuration change:

tmsh save / sys config

and restarting the syslog-ng service:

tmsh restart sys service syslog-ng

The included "f_local0" filter overrides the built-in "f_local0" syslog-ng filter, since the include statement will be the last one to load. The "not match" statement is a regex which prevents any statement containing a "##" string from being written to the /var/log/ltm log. The next filter, "f_local0_customlog", catches the "##" log statement, and the remaining include statements handle the job of sending those entries to a new destination, which is a file I chose to name "/var/log/customlog".

You may be asking yourself why I chose to match the string ": ##" instead of just "##". It turns out that specifying just "##" also catches AUDIT log entries which (in my configuration) are written every time an iRule containing the string "##" is modified. But only the log statement from the actual iRule itself will contain the ": ##" string. This slight tweak keeps those two entries separated from each other.

So now I have a way to force my iRule logging statements to a custom log file. This is great, but how do I incorporate this custom log file into the log rotation scheme like most other log files? The answer is with a logrotate include statement:

tmsh modify sys log-rotate syslog-include '" /var/log/customlog { compress missingok notifempty }"'

and save the configuration change:

tmsh save / sys config

Logrotate is kicked off by cron, and the change should get picked up the next time it is scheduled to run. And that's it: I now have a way to force iRule log statements to a custom log file which is rotated just like every other log file.

It's important to note that you must save the configuration with "tmsh save / sys config" whenever you execute an include statement. If you don't, your changes will be lost the next time your configuration is loaded. That's why I think this solution is so great: it's visible in the bigip_sys.conf file, unlike customizing configuration files directly, and it's portable.
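To sanity-check the setup, a quick test from the shell (the message text is arbitrary; the tag syslog prepends supplies the ": " in front of the "##" marker):

# Confirm the include made it into the configuration
tmsh list sys syslog include

# Write a test entry to the local0 facility
logger -p local0.info "## test entry for the custom log"

# The entry should land here rather than in /var/log/ltm
tail /var/log/customlog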
DHCP Relay Virtual Server

BIG-IP LTM version 11.1 introduces the DHCP Relay virtual server type. Previously, it was possible to forward the requests with a set of extensive iRules that probed deeply into the ways of binary, but with the new virtual server type, it is trivial.

How DHCP Works

DHCP is defined in RFC 2131 and RFC 2132 for clients and servers, as well as RFC 1542 for relay agents. The basic (successful) operation of a DHCP transaction between client and server is shown below: a client issues a broadcast in the DHCP Discover, one or more DHCP servers respond with an offer, the client responds with the binding IP address, and the server acknowledges.

A DHCP relay comes into play when a network grows beyond a handful of subnets and centralized control is desired. Because a DHCP Discover is a broadcast packet, it would never reach a centralized server, as the packet would never cross the broadcast domain into another segment. So the job of a relay is to take that broadcast, package it as a unicast request, and send it on to the defined DHCP servers.

Consider the test lab below: I have two DHCP servers configured on one side of a BIG-IP LTM VE, and a client configured for DHCP on the other. With no configuration on the LTM, the LTM receives the broadcast but does nothing with it:

09:37:11.596823 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
09:37:14.689826 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
09:37:20.522498 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
09:37:27.609915 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
09:37:42.846379 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300

Creating the Configuration

The configuration is very simple. Create a pool of your DHCP servers, assigning IP and port as appropriate (port 67 for IPv4, port 547 for IPv6). The load balancing algorithm doesn't matter, as all servers will receive the request. The virtual server configuration is equally simple: name it, select the type as DHCP Relay, choose the IPv4 or IPv6 destination, and define the VLANs this virtual should listen on. In my case that is net106, where my DHCP client resides.

Now, a DHCP Discover from my client is forwarded as expected to my DHCP servers:

08:57:30.176648 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
08:57:30.176766 IP 192.168.106.5.bootps > 192.168.40.102.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300
08:57:30.176771 IP 192.168.106.5.bootps > 192.168.40.103.bootps: BOOTP/DHCP, Request from 00:0c:29:99:0c:30, length: 300

Caveats

In a chained configuration where there are multiple BIG-IP LTMs between client and server, it is necessary to preserve the source of the originating relay agent (the self IP of the first BIG-IP LTM receiving the broadcast). This is accomplished with a no-translate SNAT:

ltm snat dhcp-no-translate {
    origins {
        192.168.106.5/32 { }
    }
    translation /Common/192.168.106.5
}

as well as a now-unicast DHCP relay on the second BIG-IP LTM, as shown in the diagram below.

For DHCP lease renewal, which is unicast, a forwarding virtual server should be configured (0.0.0.0:67/0.0.0.0), and a no-translate SNAT should be in place as well. Note that this renewal handling is optional; if it is unsuccessful, the client will revert to broadcast.
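For reference, a rough sketch of what that renewal-forwarding virtual server might look like in configuration form; the name is made up, the VLAN is the one from this lab, and the exact attribute names may vary by version:

# Illustrative only; verify attribute names on your version
ltm virtual dhcp_renew_forward {
    destination 0.0.0.0:67
    mask any
    ip-protocol udp
    ip-forward
    translate-address disabled
    translate-port disabled
    vlans { net106 }
    vlans-enabled
}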
A request for enhancement that would include the no-translate and DHCP renewal configuration as part of the DHCP Relay virtual server type selection has been submitted for consideration for future versions.
A Brief Introduction To External Application Verification Monitors

Background

EAVs (External Application Verification) monitors are one of the most useful and extensible features of the BIG-IP product line. They give the end user the ability to utilize the underlying Linux operating system to perform complex and thorough service checks. Given a service that does not have a monitor provided, a lot of users will assign the closest related monitor and consider the solution complete. There are more than a few cases where a TCP or UDP monitor will mark a service "up" even while the service is unresponsive. EAVs give us the ability to dive much deeper than merely performing a 3-way handshake and neglecting the other layers of the application or service.

How EAVs Work

An EAV monitor is an executable script located on the BIG-IP's file system (usually under /usr/bin/monitors) that is executed at regular intervals by the bigd daemon and reports its status. One of the most common misconceptions (especially amongst those with *nix backgrounds) is that the exit status of the script dictates the fate of the pool member. The exit status has nothing to do with how bigd interprets the pool member's health. Any output to stdout (standard output) from the script will mark the pool member "up". This is a nuance that should receive special attention when architecting your next EAV. Analyze each line of your script and make sure nothing will inadvertently get directed to stdout during monitor execution. The most common example is when someone writes a script that echoes "up" when the checks execute correctly and "down" when they fail. The pool member will be enabled by the BIG-IP under both circumstances, rendering a useless monitor.

Bigd automatically provides two arguments to the EAV's script upon execution: node IP address and node port number. The node IP address is provided with an IPv6 prefix that may need to be removed in order for the script to function correctly. You'll notice we remove the "::ffff://" prefix with a sed substitution in the example below. Other arguments can be provided to the script when configured in the UI (or command line). The user-provided arguments will have offsets of $3, $4, etc.

Without further ado, let's take a look at a service-specific monitor that gives us a more complete view of the application's health.

An Example

I have seen on more than one occasion where a DNS pool member has successfully passed the TCP monitor, but the DNS service was unresponsive. As a result, a more invasive inspection is required to make sure that the DNS service is in fact serving valid responses. Let's take a look at an example:

#!/bin/bash
# $1 = node IP
# $2 = node port
# $3 = hostname to resolve

[[ $# != 3 ]] && logger -p local0.error -t ${0##*/} -- "usage: ${0##*/} <node IP> <node port> <hostname to resolve>" && exit 1

node_ip=$(echo $1 | sed 's/::ffff://')

dig +short @$node_ip $3 IN A &> /dev/null

[[ $? == 0 ]] && echo "UP"

We are using the dig (Domain Information Groper) command to query our DNS server for an A record. We use the exit status from dig to determine if the monitor will pass. Notice how the script will never output anything to stdout other than "UP" in the case of success. If there aren't enough arguments for the script to proceed, we output the usage to /var/log/ltm and exit. This is a very simple 13-line script, but an effective example.
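Assuming the script above is saved as /usr/bin/monitors/dns_check.sh (a made-up name for this example) and is executable, registering it from tmsh might look roughly like this; the run and args property names are an assumption about the external monitor type, so check them against your version:

# All object names below (dns_check.sh, dns_eav, dns_pool) are examples
chmod +x /usr/bin/monitors/dns_check.sh

# Create an external monitor that runs the script; the hostname becomes argument $3
tmsh create ltm monitor external dns_eav run /usr/bin/monitors/dns_check.sh args "www.example.com" interval 5 timeout 16

# Assign it to a pool of DNS servers
tmsh modify ltm pool dns_pool monitor dns_eav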
The Takeaways

The command should be as lightweight and efficient as possible.
If the same result can be accomplished with a built-in monitor, use it.
EAV monitors don't rely on the command's exit status, only standard output.
Send all error and informational messages to logger instead of stdout or stderr (standard error).
"UP" has no significance; it is just a series of characters sent to stdout. The monitor would still pass if the script echoed "DOWN".

Conclusion

When I first discovered EAV monitors, it opened up a whole realm of possibilities that I could not accomplish with built-in monitors. It gives you the ability to do more thorough checking as well as place logic in your monitors. While my example was a simple bash script, BIG-IP also ships with Perl and Python along with their standard libraries, which offer endless possibilities. In addition to using the built-in commands and libraries, it would be just as easy to write a monitor in a compiled language (C, C++, or whatever your flavor may be) and statically compile it before uploading it to the BIG-IP. If you are new to EAVs, I hope this gives you the tools to make your environments more robust and resilient. If you're more of a seasoned veteran, we'll have more fun examples in the near future.
HTTP to HTTPS Redirect Rewrite plus port change

Problem this snippet solves:

I wrote this during a Cisco ACE to F5 LTM migration where the ACE configuration contained the following action list:

action-list type modify http URL_RW_8443
  ssl url rewrite location ".*" sslport 8443

This simply changes the HTTP Location header in HTTP 301 and 302 responses from HTTP to HTTPS and sets the port, e.g.:

http://www.test.com/some/path -> https://www.test.com:8443/some/path

It also has to work if the Location header already contains a port, e.g.:

http://www.test.com:88/some/path -> https://www.test.com:8443/some/path

Examples and an explanation of how the iRule works are in the code comments below.

How to use this snippet:

This code changes the HTTP Location header in an HTTP response from HTTP to HTTPS and adds or changes the port to 8443. To use a different port, locate the line containing ':8443' and update the port number:

set loc_list [lreplace $loc_list 2 2 "${host}:8443"]

Code:

when HTTP_RESPONSE {
    if { [string tolower [HTTP::header Location]] starts_with "http://" } {
        # Split the Location header into a list
        # e.g. http://www.test.com/path1/path2/index.html becomes
        #   'http:', '', 'www.test.com', 'path1', 'path2', 'index.html'
        set loc_list [split [HTTP::header Location] "/"]

        # Replace list element 0 ('http:') with 'https:'
        # Note: lreplace returns a new list, so the result is assigned back
        set loc_list [lreplace $loc_list 0 0 "https:"]

        # Replace list element 2 (the host), stripping any existing port and appending 8443
        # e.g. 'www.test.com:897' becomes 'www.test.com:8443'
        # e.g. 'www2.test.com' becomes 'www2.test.com:8443'
        set host [lindex [split [lindex $loc_list 2] ":"] 0]
        set loc_list [lreplace $loc_list 2 2 "${host}:8443"]

        # Join the list back together with '/' and set it as the new Location header
        # e.g. 'https:', '', 'www.test.com:8443', 'path1', 'path2', 'index.html'
        # becomes 'https://www.test.com:8443/path1/path2/index.html'
        HTTP::header replace Location [join $loc_list "/"]
    }
}

Tested this on version: 11.5
Friendly URL Redirection Scaling via iRules

The concept of a friendly URL is a pretty simple one. Basically you want to make things in your application, on your website, etc. easier to access. This stems from the fact that most applications these days make use of increasingly complex paths for a multitude of reasons. Whether it's user-specific content, auto-generated pages or otherwise, typing in a URL that looks like "http://domain.com/a7391/users/0928179/events/live/release/20110403/regions.php?region=atl" isn't something that's easy or frankly even realistic for a user. I'm not going to remember that URL, and if I'm an 8 on the geek scale, certainly the 3s and 4s of the world won't be able to manage that kind of a URL either. Nevertheless, these sorts of paths are common amongst robust applications. Enter friendly URL redirection.

To combat the plague of uncivilized URLs many people, including DevCentral, turn to friendly URL redirection. That is, they make up a shorter, more usable URL and hand that out to users instead. Then, when a user accesses that URL, they get directed to the appropriate content. For instance, using the example above of a make-believe user group in Atlanta, "http://domain.com/ug/atl" might very well redirect me to the appropriate URL while being something humans can actually remember and reproduce when it comes time to type in a URL.

To make this process even easier, this is something that iRules can handle very smoothly. Simple HTTP::redirect lists in a switch or the like work great for getting started with this process, something like:

when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::uri]] {
        "/ug/atl" {
            HTTP::redirect "http://[HTTP::host]/a7391/users/0928179/events/live/release/20110403/regions.php?region=atl"
        }
        "/ug/sea" {
            HTTP::redirect "http://[HTTP::host]/a2416/users/0622375/events/live/release/20100602/regions.php?region=sea"
        }
    }
}

A very simple example, but you get the idea: look for the short URL, redirect to the appropriate (usually longer, more complex) URL. All right, that's well and good, but where does the scaling come into play, you ask? Well, this concept is fine and dandy for 2 or 3 or even up to say 10 or 15 redirects. What if you have 100? 1000? Are you going to maintain a switch with 1000 cases? If your answer was yes, turn off your computer and go seek professional help. For those that are still here... of course you're not. You're going to find a way to make that management much simpler and re-use more generic logic.

As with most cases in an iRule when someone tells me they need to store several (more than a hundred, less than a million) records and parse through them at will, here we will be using a class. With the semi-recent improvements to classes in both performance and scale, they're suited wonderfully to this kind of task. What we'll need is a class that contains these mappings and some simple logic to parse the class, recall the key->value info as necessary, and redirect based off of that. First the class:

class "redirurls" {
    "/ug/atl" {"/a7391/users/0928179/events/live/release/20110403/regions.php?region=atl"}
    "/ug/sea" {"/a2416/users/0622375/events/live/release/20100602/regions.php?region=sea"}
    "/ug/nyc" {"/a1753/users/0524611/events/live/release/20100714/regions.php?region=nyc"}
    "/ug/la"  {"/a6542/users/0316327/events/live/release/20100312/regions.php?region=la"}
    ...
}

This could go on for however many entries you want, obviously. Also note that not all of them must be in the same format.
It's just coincidence (and efficiency) that my examples all look the same. It could be any incoming URI you want to redirect. Now that we have a class to parse, we need an iRule to do the parsing. This is a pretty simple setup, but that's kind of the idea:

when HTTP_REQUEST {
    set newuri [class match -value [string tolower [HTTP::uri]] equals redirurls]
    if { $newuri ne "" } {
        HTTP::redirect "http://[HTTP::host]$newuri"
        unset newuri
    }
}

As you can see, the actual iRules code required is extremely simple. All you need is a class match to return the value in the class when searching based on the URI that's coming in. If there's a match, the HTTP::redirect fires and sends the user to the appropriate URI, and they're on their way. There are hundreds of tweaks, customizations and embellishments that could be made here, so feel free to think outside the box. This is just a basic look at the concept. From here the sky's the limit. So there you have it, a scalable way to handle friendly URL redirection via iRules.
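As a closing note, on versions where classes are managed as data groups from tmsh, the same list could be created along these lines (only the first two records from the class above are shown, and the exact record syntax may differ by version):

# Record syntax is an approximation; adjust for your version
tmsh create ltm data-group internal redirurls type string records add { "/ug/atl" { data "/a7391/users/0928179/events/live/release/20110403/regions.php?region=atl" } "/ug/sea" { data "/a2416/users/0622375/events/live/release/20100602/regions.php?region=sea" } }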
Problems Overcome During a Major LTM Software/Hardware Upgrade

I recently completed a successful major LTM hardware and software migration which accomplished two high-level goals:

Software upgrade from v9.3.1HF8 to v10.1.0HF1
Hardware platform migration from 6400 to 6900

I encountered several problems during the migration event that would have stopped me in my tracks had I not (in most cases) encountered them already during my testing. This is a list of those issues and what I did to address them. While I may not have all the documentation about these problems or even fully understand all the details, the bottom line is that the fixes worked. My hope is that someone else will benefit from them when it counts the most (and you know what I mean).

Problem #1: Unable to Access the Configuration Utility (admin GUI)

The first issue I had to resolve was apparent immediately after the upgrade finished. When I tried to access the Configuration utility, I was denied:

Access forbidden!
You don't have permission to access the requested object.
Error 403

I happened to find the resolution in SOL7448: Restricting access to the Configuration utility by source IP address. The SOL refers to bigpipe commands, which is what I used initially:

bigpipe httpd allow all add
bigpipe save

Since then, I've developed the corresponding TMSH commands, which reflect F5's long-term direction for managing the system:

tmsh modify sys httpd allow replace-all-with {all}
tmsh save / sys config

Problem #2: Incompatible Profile

I encountered the second issue after the upgraded configuration was loaded for the first time:

[root@bigip2:INOPERATIVE] config # BIGpipe unknown operation error:
01070752:3: Virtual server vs_0_0_0_0_22 (forwarding type) has an incompatible profile.

By reviewing the /config/bigip.conf file, I found that my forwarding virtual servers had a TCP profile applied:

virtual vs_0_0_0_0_22 {
    destination any:22
    ip forward
    ip protocol tcp
    translate service disable
    profile custom_tcp
}

Apparently v9 did not care about this, but v10 would not load until I manually removed these TCP profile references from all of my forwarding virtual servers.

Problem #3: BIGpipe Parsing Error

Then I encountered another problem while attempting to load the configuration for the first time:

BIGpipe parsing error (/config/bigip.conf Line 6870):
012e0022:3: The requested value (x.x.x.x:3d-nfsd {) is invalid (show | <pool member list> | none) [add | delete]) for 'members' in 'pool'

While examining this error, I noticed that the port number was translated into a service name ("3d-nfsd"). Fortunately, during my initial v10 research I had come across SOL11293: The default /etc/services file in BIG-IP version 10.1.0 contains service names that may cause a configuration load failure. While I had added a step in my upgrade process to prevent the LTM from performing service translation, it was not scheduled until after the configuration had been successfully loaded on the new hardware.
Instead, I had to move this step up in the overall process flow:

bigpipe cli service number
b save

The corresponding TMSH commands are:

tmsh modify cli global-settings service number
tmsh save / sys config

Problem #4: Command Is Not Valid in Current Event Context

This was the final error we encountered when trying to load the upgraded configuration:

BIGpipe rule creation error:
01070151:3: Rule [www.mycompany.com] error: line 28: [command is not valid in current event context (HTTP_RESPONSE)] [HTTP::host]

While reviewing the iRule it was obvious that we had a statement which didn't make any sense, since there is no Host header in an HTTP response. Apparently it didn't bother v9, but v10 didn't like it:

when HTTP_RESPONSE {
    switch -glob [string tolower [HTTP::host]] {
        <do some stuff>
    }
}

We simply removed that event from the iRule.

Problem #5: Failed Log Rotation

After I finished my first migration, I found myself in a situation where the logs in the /var/log directory were not being rotated. The /var/log/secure log file held the best clue about the underlying issue:

warning crond[7634]: Deprecated pam_stack module called from service "crond"

I had to open a case with F5, who found that the PAM crond configuration file (/config/bigip/auth/pam.d/crond) had been pulled from the old unit:

#
# The PAM configuration file for the cron daemon
#
#
auth       sufficient pam_rootok.so
auth       required   pam_stack.so service=system-auth
auth       required   pam_env.so
account    required   pam_stack.so service=system-auth
session    required   pam_limits.so
#session   optional   pam_krb5.so

I had to update the file from a clean unit (which I was fortunate enough to have at my disposal):

#
# The PAM configuration file for the cron daemon
#
#
auth       sufficient pam_rootok.so
auth       required   pam_env.so
auth       include    system-auth
account    required   pam_access.so
account    sufficient pam_permit.so
account    include    system-auth
session    required   pam_loginuid.so
session    include    system-auth

and restart crond:

bigstart restart crond

or in the v10 world:

tmsh restart sys service crond

Problem #6: LTM/GTM SSL Communication Failure

This particular issue is the sole reason that my most recent migration process took 10 hours instead of four. Even if you do have a GTM, you are not likely to encounter it, since it was a result of our own configuration. But I thought I'd include it since it isn't something you'll see documented by F5.

One of the steps in my migration plan was to validate successful LTM/GTM communication with iqdump. When I got to this point in the migration process, I found that iqdump was failing in both directions because of SSL certificate verification, despite having installed the new Trusted Server Certificate on the GTM and Trusted Device Certificates on both the LTM and GTM. After several hours of troubleshooting, I decided to perform a tcpdump to see if I could gain any insight based on what was happening on the wire. I didn't notice it at first, but when I looked at the trace again later I noticed the hostname on the certificate that the LTM was presenting was not correct. It was a very small detail that could have easily been missed, but it was the key to identifying the root cause. Having dealt with Device Certificates in the past, I knew that the Device Certificate file was /config/httpd/conf/ssl.crt/server.crt. When I looked in that directory on the filesystem, I found a number of certificates (and subsequently, private keys in /config/httpd/conf/ssl.key) that should not have been there.
I also found that these certificates and keys had been pulled over from the configuration on the old hardware. So I removed the extraneous certificates and keys from these directories and restarted the httpd service ("bigstart restart httpd", or "tmsh restart sys service httpd"). After I did that, the LTM presented the correct Device Certificate and LTM/GTM communication was restored. I'm still not sure to this day how those certificates got there in the first place...
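For reference, the iqdump validation mentioned in Problem #6 amounts to running the utility from the shell on each unit against the peer's self IP (the addresses below are placeholders). When certificate verification fails, the dump errors out instead of streaming iQuery data:

# From the LTM, against the GTM self IP (placeholder address)
iqdump 10.10.10.50

# From the GTM, against the LTM self IP (placeholder address)
iqdump 10.10.10.60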
10 Ways to HA (and counting): a treatise on BIG-IP high availability

I had a conversation with someone not too long ago on the subject of BIG-IP high availability. BIG-IP is primarily a load balancer, so some forms of high availability (load balancing across multiple web servers, for instance) are obvious. But as you probably well know, there are several other characteristics of BIG-IP that can create high availability. The dialog started off slowly (load balancing, health monitors, blah, blah, etc., etc.), then as the possibilities started stacking up the conversation got lively and it became sort of a game. Out came the whiteboard, and after a few more ideas we had 10 ways to HA. As time went by the game itself evolved in my mind. Sure, "10 ways to HA" is sort of catchy, but I knew there was more. I continued to share this idea with colleagues and eventually hit a happy 14 (though there are definitely more), which I'd like to share with you now.

Before I begin, let me be clear that not all high availability characteristics are necessarily attributes of BIG-IP (most are), but rather part of the environment where BIG-IP is a key player. We tried to think of every conceivable reason why a user couldn't access an application and thought about how the environment could defend against that. Let's start with a picture:

BIG-IP installed in pairs: Installing them in pairs, typically active/standby, ensures that the failure of one BIG-IP does not bring down your applications. In fact, with a hardware heartbeat connection between them, failover time is measured in microseconds.

Redundant switches: Switches and/or routers are typically deployed in front of and/or behind the BIG-IP to increase port density. Multiple switches, with similar failover capabilities, also ensure there's always a network path from client to application.

Shared IP addresses: When you create a VLAN and assign self-IP addresses, you can also create "floating" IP addresses that span both members of the BIG-IP pair. This ensures proper layer 3 routing if a BIG-IP should fail, as the return IP address is always the same.

Shared MAC addresses: Along with floating IP addresses, you can define "masquerading" MAC addresses that ensure a proper layer 2 path if a BIG-IP should fail.

Trunking: Link aggregation and 802.1q "VLAN tagging" not only allow the aggregation of bandwidth from multiple physical ports, but also provide redundancy to a VLAN should a physical port connection fail.

Load balancing: This is really a no-brainer, but load balancing across multiple services ensures that the application can support greater numbers of requests without overloading a single server.

Health monitoring: Where load balancing spreads application traffic across multiple services, health monitoring ensures that no requests are sent to services that have failed. This, in my opinion, is one of the coolest and most powerful attributes of the BIG-IP. Health monitors can monitor and interact with applications and servers at pretty much every level.

Transparent monitoring: So cool it deserves its own title, transparent monitors can monitor through a device (like a router or switch between endpoints), essentially giving you high availability along a path.

Global load balancing: Where local load balancing leaves off, global load balancing ensures high availability across datacenters, across WANs, across the planet! So rest assured, should Godzilla attack your Japan office, your application will still be accessible from another datacenter.
Global load balancing monitors: There are monitors at the global level that can monitor the load balancers that are monitoring your applications.

VMware integration: This is a great feature that employs F5's iControl capability. You provision offline resources in your VM environment, and when VirtualCenter detects the passing of some pre-defined threshold (memory, processor, or user concurrency overload) it turns those resources on. Once they are up and ready to start taking some of the load, VirtualCenter contacts the BIG-IP through the iControl interface, indicating the IP addresses of the new VMs. The BIG-IP automatically adds those addresses to the load balancing pool. No intervention required, providing a dynamic, self-provisioning environment that grows as customer demand increases.

Session state sharing: Not so much a BIG-IP thing, but most modern web servers allow their application session states to be shared, usually in a database or another "state server". Typically, when you log into an application that needs to maintain session state, it returns a session token that it uses to track your movement and ensure authentication. That session token (usually a cookie) contains a unique identifier that maps to a piece of memory in the web server's session table. So while the BIG-IP maintains persistence to that server, if the application fails and you must be sent to another server, your session is gone and you'll likely have to start over. But if you allow the web servers to share session state, that in-memory table is replaced with something that is accessible to all of the application servers. You can then literally shut servers off for maintenance in the middle of the afternoon and never interrupt user sessions.

Session mirroring: It's a little-known feature of BIG-IP that allows it to share, or rather mirror, session information between peers (that's persistence information, anything stored in the session table for a user, etc.).

iRules: And finally, there's iRules. Wait, what? iRules create high availability? Why yes, they do. I could go on and on about the coolness of iRules, but for starters there are events like LB_FAILED that are designed specifically to catch availability failures, and commands like HTTP::retry that allow you to retry a request if it originally failed (a short sketch follows at the end of this article). You could even use iRules to replicate some of the functions of an application. The sky is really the limit.

And there you have it, 10 (err, I mean 14) ways to HA! Let nothing stand in our way. Muhaha! Seriously though, there are so many different ways to achieve high availability in a BIG-IP environment. I left out database clustering (also not really a BIG-IP thing, but the BIG-IP SQL monitors are awesome!), and fast data replication and de-duplication across WAN links with iSessions. There's also Access Policy Manager (APM) credential caching, VMware VMotion across datacenters with the WAN Optimization Module (WOM), Edge Gateway's "always connected" capability, and link load balancing with Link Controller. This has turned out to be a pretty entertaining topic amongst my geeky colleagues. I now challenge you to keep thinking about ways to achieve high availability in a BIG-IP environment. Who'll be the first to hit 20?!

Thanks,
Kevin Stewart
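As promised in the iRules item above, here is a minimal sketch of the kind of availability logic meant there; the pool name is a placeholder and the retry handling is deliberately simple:

when HTTP_REQUEST {
    # Save the original request so it can be replayed if the first attempt fails
    set orig_request [HTTP::request]
}
when LB_FAILED {
    # The chosen pool member never answered; pick another member and try again
    # web_pool is a placeholder pool name
    LB::reselect pool web_pool
}
when HTTP_RESPONSE {
    # Treat a 5xx from the server as a failure and replay the request once
    if { [HTTP::status] >= 500 && ![info exists retried] } {
        set retried 1
        HTTP::retry $orig_request
    }
}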