on 12-Mar-2015 15:30
Problem this snippet solves:
Super HTTP supports GET and POST requests, HTTP 1.0 and 1.1, Host headers, User-Agent headers, HTTP and HTTPS. It supports cookies. It supports authentication (basic, digest, and NTLM). It can run its checks through a proxy. Most notably, it supports chains of HTTP requests with cookie preservation between them, which I think will be very useful for LTM and GTM, allowing you to validate end-to-end functionality.
Note that this monitor will do just about whatever you need, but if you just want a simple HTTP monitor, try the built-in monitor first. Although this script is fairly efficient (i.e., it doesn't do more work than it needs to), it can never be anywhere near as efficient as the built-in monitor.
Note that the native HTTP/S monitors now support NTLM / NTLMv2 authentication.
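The cookie preservation between chained requests works by scraping Set-Cookie headers out of each response and re-exporting them as COOKIE_* variables for the next request. A minimal standalone sketch of that mechanism, reusing the monitor's own sed expression (the response text is a made-up example, not live server output, and eval is used here rather than the script's bare backticks so the sketch is self-contained; assumes bash, which the monitor itself requires):

```shell
# Hypothetical response from the first request in a chain.
response='HTTP/1.1 200 OK
Set-Cookie: country=usa; Path=/
Set-Cookie: language=en; Path=/'

# Turn each Set-Cookie header into an "export COOKIE_name='value';"
# statement and evaluate it, so later requests can see the cookies.
eval "$(printf '%s\n' "$response" \
  | sed -n "s/^Set-Cookie: \([^=]\+\)=\([^;]\+\);.*$/export COOKIE_\1='\2';/ip")"

# Rebuild the outgoing Cookie header the way make_request does:
# join all COOKIE_* variables with "; " and strip the leading separator.
cookie_str=""
for i in ${!COOKIE_*} ; do
  name=${i#COOKIE_}
  cookie_str="$cookie_str; $name=$(eval echo \$$i)"
done
cookie_str=${cookie_str#; }
echo "Cookie: $cookie_str"   # Cookie: country=usa; language=en
```

Because the script keys cookies only by name (not domain or path), a cookie set by any response simply overwrites a same-named cookie for all subsequent requests.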
How to use this snippet:
Create a new file containing the code below in /usr/bin/monitors on the LTM filesystem. Permissions on the file must be 700 or tighter, giving root rwx access to the file. See the comments within the code for documentation.
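The install steps above can be sketched as follows ("super_http.sh" is a hypothetical filename; use whatever name you save the code under):

```shell
# Target directory for LTM external monitors, per the instructions above.
MONITOR_DIR="/usr/bin/monitors"
# Outside a BIG-IP you may not be able to write there; fall back to a
# temporary directory purely so these commands can be tried anywhere.
mkdir -p "$MONITOR_DIR" 2>/dev/null
[ -w "$MONITOR_DIR" ] || MONITOR_DIR="$(mktemp -d)"

SCRIPT="$MONITOR_DIR/super_http.sh"
# Placeholder content; paste the full monitor code from the Code section.
printf '#!/bin/bash\n# paste the monitor code here\n' > "$SCRIPT"
chmod 700 "$SCRIPT"   # root gets rwx; group and other get nothing
```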
Code:
#!/bin/bash
# (c) Copyright 2007 F5 Networks, Inc.
# Kirk Bauer
# Version 1.3, Aug 5, 2010
#
# Revision History
#  8/5/10:   Version 1.3: Fixed problem with cookie parsing
#  12/16/09: Version 1.2: Fixed problem with NTLM health checks
#  3/11/07:  Version 1.1: Added ability for multiple regexes in MATCH_REGEX
#  2/28/07:  Version 1.0.1: Initial Release
#
# When defining an external monitor using this script, the argument
# field may contain the path for the request. In addition, a large
# number of variables may be defined as described below.
#
# The argument field can contain an optional path for the request;
# if nothing is specified, the default path of / is assumed. This
# is also where you can put query parameters. Some examples:
#   /index.asp
#   /verify_user.html?user=testuser
#
# This script can retrieve a chain of URLs. This is useful for two
# scenarios. The first is if you want to check a number of different
# pages on a site: you can use one custom monitor that checks all of
# them instead of defining a bunch of separate monitors.
#
# The other scenario is when you want to perform a test that requires
# more than one page in sequence, such as something that tracks state
# with cookies. In order to do a test login on some sites, for example,
# you must first go to one URL where you are assigned a cookie, then
# you must log in through another URL with that cookie along with your
# username/password. This script automatically stores and sends cookies
# for chains of requests.
#
# The next section describes per-request variables. These are options
# that can be specified a number of times for a number of separate
# requests. In the most basic case, you must specify the URI_PATH for
# each request. The only exception is that the last path is taken from
# the "argument" string if URI_PATH is not specified.
# So, to do three requests in a row, specify:
#   URI_PATH_1=/path/to/request1
#   STATUS_CODE_1=200
#   URI_PATH_2=/path/to/request2
#   STATUS_CODE_2=200
#   URI_PATH=/path/to/request3
#   MATCH_REGEX="you are logged in"
#
# It is important to understand that there is always at least one
# request, and that last request uses variables with no number appended
# to them. All other requests are done in numerical order before that
# last request. If you have more than 10 requests, you need to use
# 2-digit numbers instead of the 1-digit numbers in the example above.
#
#############################################################
# Per-request Variables
# (names provided are for the last (possibly only) request;
# for other requests, append a _# on the end as described
# above).
#############################################################
# Define the request:
# URI_PATH: the full path you want to request, such as /index.html.
#   This is required for every request, except that it need not
#   be defined for the last (sometimes only) request if you specify
#   the path in the "argument" field. You may include a query string
#   at the end, like /index.html?test=true
# QUERY_STRING: You may specify the GET query string here instead of
#   appending it to the URI_PATH. Example:
#   name1=value1&name2=value2
# NODE_ADDR: The IP address to connect to. By default this will be
#   the pool member IP that is being checked. Can also be a hostname
#   if DNS resolution works on the BIG-IP.
# NODE_PORT: The port to connect to. By default this will be the
#   port of the pool member being checked.
# PROTOCOL: Either http or https. If not specified, assumed to be
#   http unless the port is 443.
# POST_DATA: You may define post data to make this a POST request,
#   such as:
#   name1=value1&name2=value2
# HOST_HEADER: The host header to send to the remote server. Default
#   will be the value of NODE_ADDR.
# REFERER: The referer URL to send in the request (this variable is
#   misspelled just like the HTTP header is).
#
# Authentication options for each request:
# USERNAME: provide this username to the webserver
# PASSWORD: provide this password to the webserver
# AUTHTYPE: "basic", "digest", or "ntlm" (default is basic)
#
# The following variables may be defined and determine what constitutes
# an "up" status. If none of these are specified, the script will return
# "up" only if the web server returns a status of 200 (OK). Any or all
# of these may be specified.
# HTTPS_HOSTNAME: you may optionally specify the hostname that the
#   certificate should present for https checks.
# STATUS_CODE: numerical status code to match
# NOT_STATUS_CODE: numerical status code that shouldn't be matched
# MATCH_REGEX: regular expression that should be matched in the headers
#   or body. OPTIONAL: multiple regexes to match may be specified using
#   the format:
#   MATCH_REGEX = &regex1&regex2&regex3
#   If using multiple regexes, you must start the string with & and
#   the regexes themselves cannot contain the & character
# NOT_MATCH_REGEX: regular expression that should not be matched in the
#   headers or body.
#
#############################################################
# Cookies
#############################################################
# You can set any number of cookies by specifying one or more variables
# named COOKIE_Name. So if you set the variables COOKIE_country = usa
# and COOKIE_language = en, then the cookie string would be
# "Cookie: country=usa; language=en". These cookies will be sent for
# every request. If you are doing multiple requests, then any cookies
# sent by the server will replace any existing cookie of the same name
# in future requests. This script does not consider domain or path
# but instead just sends all cookies for all requests.
#
#############################################################
# Global Variables
#############################################################
# HTTP/HTTPS Options (apply to all requests):
# USER_AGENT: set to the user agent string you want to send.
#   Default is something similar to:
#   curl/7.15.3 (i686-redhat-linux-gnu) libcurl/7.15.3 OpenSSL/0.9.7i zlib/1.1.4
# HTTP_VERSION: set to "1.0" or "1.1", defaults to 1.1.
# SSL_VERSION: set to "tlsv1", "sslv2", or "sslv3".
# CIPHERS: override SSL ciphers that can be used (see "man
#   ciphers"), default is "DEFAULT".
#
# Global Proxy Settings (optional):
# PROXY_HOST: IP address of the proxy to use (or hostname if DNS resolution works)
# PROXY_PORT: Port to connect to on the proxy (required if PROXY_HOST is specified)
# PROXY_TYPE: "http", "socks4", or "socks5" (defaults to http)
# PROXY_AUTHTYPE: "basic", "digest", or "ntlm" (basic is default if a username is specified)
# PROXY_USERNAME: username to provide to the proxy
# PROXY_PASSWORD: password to provide to the proxy
#
# Other Variables:
# LOG_FAILURES: set to "1" to enable logging of failures, which will
#   log monitor failures to /var/log/ltm (viewable in the GUI under
#   System -> Logs -> Local Traffic (tab))
# LOG_COOKIES: set to "1" to log cookie activity to /var/log/ltm (also
#   logs each request as it is made).
# DEBUG: set to "1" to create .output and .trace files in /var/run for
#   each request for debugging purposes.
SCRIPTNAME=${MON_TMPL_NAME:-$0}

# Collect arguments
global_node_ip=$(echo "$1" | sed 's/::ffff://')
global_port="${2:-80}"
[ -z "$URI_PATH" ] && URI_PATH="${3:-/}"

# Handle PID file
pidfile="/var/run/$SCRIPTNAME.$global_node_ip.$global_port.pid"
tmpfile="/var/run/$SCRIPTNAME.$global_node_ip.$global_port.tmp"
[ -f "$pidfile" ] && kill -9 "$(cat "$pidfile")" >/dev/null 2>&1
rm -f "$pidfile" ; echo "$$" > "$pidfile"
rm -f "$tmpfile"

fail () {
  [ -n "$LOG_FAILURES" ] && [ -n "$*" ] && \
    logger -p local0.notice "$SCRIPTNAME($global_node_ip:$global_port): $*"
  rm -f "$tmpfile"
  rm -f "$pidfile"
  exit 1
}

make_request () {
  # First argument is blank for last request or "_#" for others
  local id="$1"

  # Collect the arguments to use for this request, first start with ones
  # that have default values if not specified
  local node_ip="$global_node_ip"
  [ -n "$(eval echo \$NODE_ADDR$id)" ] && node_ip="$(eval echo \$NODE_ADDR$id)"
  local port="$global_port"
  [ -n "$(eval echo \$NODE_PORT$id)" ] && port="$(eval echo \$NODE_PORT$id)"
  local protocol="http"
  [ "$port" -eq "443" ] && protocol="https"
  [ -n "$(eval echo \$PROTOCOL$id)" ] && protocol="$(eval echo \$PROTOCOL$id)"

  # Now the rest come straight from the environment variables
  local authtype="$(eval echo \$AUTHTYPE$id)"
  local username="$(eval echo \$USERNAME$id)"
  local password="$(eval echo \$PASSWORD$id)"
  local host_header="$(eval echo \$HOST_HEADER$id)"
  local referer="$(eval echo \$REFERER$id)"
  local uri_path="$(eval echo \$URI_PATH$id)"
  local query_string="$(eval echo \$QUERY_STRING$id)"
  [ -n "$query_string" ] && query_string="?$query_string"
  local post_data="$(eval echo \$POST_DATA$id)"
  local https_hostname="$(eval echo \$HTTPS_HOSTNAME$id)"
  local status_code="$(eval echo \$STATUS_CODE$id)"
  local not_status_code="$(eval echo \$NOT_STATUS_CODE$id)"
  local match_regex="$(eval echo \$MATCH_REGEX$id)"
  local not_match_regex="$(eval echo \$NOT_MATCH_REGEX$id)"

  # Determine what we are checking for
  [ -z "$match_regex" ] && [ -z "$not_match_regex" ] && \
    [ -z "$status_code" ] && [ -z "$not_status_code" ] && status_code=200

  [ -n "$https_hostname" ] && [ "$protocol" == "https" ] && {
    # The cert will contain a hostname but curl is going by IP, so the
    # request will fail but give us the hostname in the error
    local actual_ssl_hostname=$(curl $global_args --cacert '/config/ssl/ssl.crt/ca-bundle.crt' \
      "$protocol://$node_ip:$port$uri_path$query_string" 2>&1 | \
      sed -n "s/^.*SSL: certificate subject name '\(.*\)' does not match target host name.*$/\1/p")
    [ "$actual_ssl_hostname" == "$https_hostname" ] || \
      fail "HTTPS Hostname '$actual_ssl_hostname' does not match HTTPS_HOSTNAME$id=$https_hostname"
  }

  # Determine argument string for curl
  local args=""
  [ -n "$host_header" ] && args="$args --header 'Host: $host_header'"
  [ -n "$referer" ] && args="$args --referer '$referer'"
  [ -n "$post_data" ] && args="$args --data '$post_data'"
  # IP used in URL will never match hostname in cert; use HTTPS_HOSTNAME to check separately
  [ "$protocol" == "https" ] && args="$args --insecure"
  [ -n "$DEBUG" ] && args="$args --trace-ascii '$tmpfile.trace$id'"
  [ -n "$username" ] && {
    # Specify authentication information
    args="$args --user '$username:$password'"
    [ "$authtype" == "digest" ] && args="$args --digest"
    [ "$authtype" == "ntlm" ] && args="$args --ntlm"
  }

  # Determine cookies to send, if any
  local cookie_str=""
  for i in ${!COOKIE_*} ; do
    cookie_name=$(echo $i | sed 's/^COOKIE_//')
    cookie_str="$cookie_str; $cookie_name=$(eval echo "$"$i)"
  done
  cookie_str="$(echo "$cookie_str" | sed 's/^; //')"
  [ -n "$LOG_COOKIES" ] && logger -p local0.notice \
    "$SCRIPTNAME($global_node_ip:$global_port): $protocol://$node_ip:$port$uri_path$query_string: cookie string [$cookie_str]"
  [ -n "$cookie_str" ] && args="$args --cookie '$cookie_str'"

  # Make request
  eval curl -i $global_args $args "'$protocol://$node_ip:$port$uri_path$query_string'" \
    >"$tmpfile" 2>/dev/null || \
    fail "$protocol://$node_ip:$port$uri_path$query_string: Request failed: $?"
  [ -n "$DEBUG" ] && cp "$tmpfile" "$tmpfile.debug$id"

  # Validate Check Conditions
  [ -n "$status_code" ] || [ -n "$not_status_code" ] && {
    local actual_status_code=$(head -n 1 "$tmpfile" | \
      sed "s/^HTTP\/.\.. \([0123456789][0123456789][0123456789]\) .*$/\1/")
    [ "$actual_status_code" -eq 401 ] && [ "$authtype" == "ntlm" ] && {
      # Skip past 401 Unauthorized response and look at second response code
      actual_status_code=$(grep '^HTTP/' "$tmpfile" | tail -n 1 | \
        sed "s/^HTTP\/.\.. \([0123456789][0123456789][0123456789]\) .*$/\1/")
    }
    [ -n "$status_code" ] && [ "$actual_status_code" -ne "$status_code" ] && \
      fail "$protocol://$node_ip:$port$uri_path$query_string: Status code ($actual_status_code) not what was expected (STATUS_CODE$id=$status_code)"
    [ -n "$not_status_code" ] && [ "$not_status_code" -eq "$actual_status_code" ] && \
      fail "$protocol://$node_ip:$port$uri_path$query_string: Status code ($actual_status_code) was what was not expected (NOT_STATUS_CODE$id=$not_status_code)"
  }

  [ -n "$match_regex" ] && {
    if echo "$match_regex" | grep -q '^&' ; then
      IFS="&"
      match_regex="$(echo "$match_regex" | sed 's/^&//')"
      for regex in $match_regex ; do
        egrep -q "$regex" "$tmpfile" || \
          fail "$protocol://$node_ip:$port$uri_path$query_string: Did not find [MATCH_REGEX$id=$regex] in response"
      done
      unset IFS
    else
      egrep -q "$match_regex" "$tmpfile" || \
        fail "$protocol://$node_ip:$port$uri_path$query_string: Did not find [MATCH_REGEX$id=$match_regex] in response"
    fi
  }

  [ -n "$not_match_regex" ] && egrep -q "$not_match_regex" "$tmpfile" && \
    fail "$protocol://$node_ip:$port$uri_path$query_string: Found [NOT_MATCH_REGEX$id=$not_match_regex] in response"

  # Store cookies from response for next request (if any)
  [ -z "$id" ] && return
  `sed -n "s/^Set-Cookie: \([^=]\+\)=\([^;]\+\);.*$/export COOKIE_\1='\2';/ip" "$tmpfile"`
}

# Build global option string
global_args=""
[ "$HTTP_VERSION" == "1.0" ] && global_args="$global_args --http1.0"
[ "$SSL_VERSION" == "tlsv1" ] && global_args="$global_args --tlsv1"
[ "$SSL_VERSION" == "sslv2" ] && global_args="$global_args --sslv2"
[ "$SSL_VERSION" == "sslv3" ] && global_args="$global_args --sslv3"
[ -n "$USER_AGENT" ] && global_args="$global_args --user-agent '$USER_AGENT'"
[ -n "$CIPHERS" ] && global_args="$global_args --ciphers '$CIPHERS'"
[ -n "$PROXY_HOST" ] && [ -n "$PROXY_PORT" ] && {
  if [ "$PROXY_TYPE" == "socks4" ] ; then
    global_args="$global_args --socks4 '$PROXY_HOST:$PROXY_PORT'"
  elif [ "$PROXY_TYPE" == "socks5" ] ; then
    global_args="$global_args --socks5 '$PROXY_HOST:$PROXY_PORT'"
  else
    global_args="$global_args --proxy '$PROXY_HOST:$PROXY_PORT'"
  fi
  [ -n "$PROXY_USERNAME" ] && {
    global_args="$global_args --proxy-user '$PROXY_USERNAME:$PROXY_PASSWORD'"
    [ "$PROXY_AUTHTYPE" == "digest" ] && global_args="$global_args --proxy-digest"
    [ "$PROXY_AUTHTYPE" == "ntlm" ] && global_args="$global_args --proxy-ntlm"
  }
}

# Perform the numbered requests in order
requests="$(echo ${!URI_PATH_*} | sort)"
for request in $requests ; do
  id=$(echo $request | sed 's/^URI_PATH//')
  make_request "$id"
done

# Perform last request
make_request ""

# If we got here without calling fail() and exiting, status was good
rm -f "$tmpfile"
echo "up"
rm -f "$pidfile"
exit 0
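Once installed, the script is attached to a pool as an external monitor whose user-defined variables carry the per-request settings. A sketch of a two-request login check (the monitor name, paths, and credentials are all invented, and exact tmsh syntax varies by TMOS version, so treat this as a starting point rather than a definitive command):

```shell
tmsh create ltm monitor external my_super_http \
    run /usr/bin/monitors/super_http.sh \
    user-defined URI_PATH_1 "/login.html" \
    user-defined STATUS_CODE_1 "200" \
    user-defined URI_PATH "/do_login.cgi" \
    user-defined POST_DATA "user=monitor&pass=secret" \
    user-defined MATCH_REGEX "you are logged in"
```

The first request fetches the login page (picking up any session cookie), and the final, unnumbered request posts the credentials and checks the body for a success string.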
How much impact would this have on a larger deployment? I have concerns based on the external monitor documentation and the seriousness with which it is worded:
The cookie handling support would be very desirable for a particular application we have... but I'm afraid to implement something like the above given the dual warnings (here and above).