X-Forwarded-For Log Filter for Windows Servers
For those who don't know what X-Forwarded-For is, you might as well close your browser, because this post likely will mean nothing to you…

A Little Background

Now, if you are still reading this, then you are likely having issues determining the origin of client connections to your web servers. When web requests are passed through proxies, load balancers, application delivery controllers, etc., the client no longer has a direct connection with the destination server, and all traffic looks like it's coming from the last server in the chain. In the following diagram, Proxy2 is the last hop in the chain before the request hits the destination server. Relying on connection information alone, the server thinks that all connections come from Proxy2, not from the Client that initiated the connection. The only one in the chain here who knows who the client really is (as determined by its client IP address) is Proxy1. The problem is that application owners rely on source client information for many reasons, ranging from analyzing client demographics to tracking down the source of Denial of Service attacks. That's where the X-Forwarded-For header comes in. It is a non-RFC-standard HTTP request header that is used for identifying the originating IP address of a client connecting to a web server through a proxy. The format of the header is:

X-Forwarded-For: client, proxy1, proxy2, …

X-Forwarded-For header logging is supported in Apache (with mod_proxy), but Microsoft IIS does not have a direct way to translate the X-Forwarded-For value into the client IP (c-ip) field used in its web server logging. Back in September 2005 I wrote an ISAPI filter that can be installed within IIS to perform this translation. This was primarily for F5 customers, but I figured that I might as well release it into the wild, as others would find value in it. Recently folks have asked for 64-bit versions (especially with the release of Windows Server 2008). This gave me the opportunity to brush up on my C skills. In addition to building targets for 64-bit Windows, I went ahead and added a few new features that have been asked for.

Proxy Chain Support

The original implementation did not correctly parse the "client, proxy1, proxy2, …" format and assumed that there was a single IP address following the X-Forwarded-For header. I've added code to tokenize the values and strip out all but the first token in the comma-delimited chain for inclusion in the logs.

Header Name Override

Others have asked to be able to change the header name that the filter looks for from "X-Forwarded-For" to some customized value. In some cases they were using the X-Forwarded-For header for another reason and wanted to use iRules to create a new header that was to be used in the logs. I implemented this by adding a configuration file option for the filter. The filter will look for a file named F5XForwardedFor.ini in the same directory as the filter with the following format:

[SETTINGS]
HEADER=Alternate-Header-Name

The value of "Alternate-Header-Name" can be changed to whatever header you would like to use.
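Since the post mentions using iRules to create the alternate header, here is a minimal sketch of what that might look like on the BIG-IP. The header name "Client-Real-IP" is hypothetical; it just needs to match the HEADER value you set in F5XForwardedFor.ini:

when HTTP_REQUEST {
   # insert the true client address into a custom header
   # so the ISAPI filter can log it on the IIS server
   HTTP::header insert "Client-Real-IP" [IP::client_addr]
}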
Download

I've updated the original distribution file so that folks hitting my previous blog post would get the updates. The following zip file includes 32- and 64-bit release versions of the F5XForwardedFor.dll that you can install under IIS6 or IIS7.

Installation

Follow these steps to install the filter:

1. Download and unzip the F5XForwardedFor.zip distribution.
2. Copy the F5XForwardedFor.dll file from the x86\Release or x64\Release directory (depending on your platform) into a target directory on your system. Let's say C:\ISAPIFilters.
3. Ensure that the containing directory and the F5XForwardedFor.dll file have read permissions by the IIS process. It's easiest to just give full read access to everyone.
4. Open the IIS Admin utility and navigate to the web server you would like to apply it to.
5. For IIS6, right-click on your web server and select Properties. Then select the "ISAPI Filters" tab. From there click the "Add" button, enter "F5XForwardedFor" for the Name and the path to the file "c:\ISAPIFilters\F5XForwardedFor.dll" in the Executable field, and click OK enough times to exit the property dialogs. At this point the filter should be working for you. You can go back into the property dialog to determine whether the filter is active or an error occurred.
6. For IIS7, you'll want to select your website and then double-click on the "ISAPI Filters" icon that shows up in the Features View. In the Actions pane on the right, select the "Add" link and enter "F5XForwardedFor" for the name and "C:\ISAPIFilters\F5XForwardedFor.dll" for the Executable. Click OK and you are set to go.

I'd love to hear feedback on this, and if there are any other feature requests, I'm wide open to suggestions. The source code is included in the download distribution, so if you make any changes yourself, let me know! Good luck and happy filtering!

-Joe
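A quick way to verify the filter is doing its job, assuming you can reach the IIS server directly, is to send a request with a hand-crafted X-Forwarded-For header and then check the c-ip value in the IIS log. The address 10.10.10.10 below is just a test value:

curl -H "X-Forwarded-For: 10.10.10.10" http://your-iis-server/

If the filter is working, the log entry for that request should show 10.10.10.10 as the client IP rather than the address of the machine you ran curl from.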
Two-Factor Authentication With Google Authenticator And LDAP

Introduction

Earlier this year Google released their time-based one-time password (TOTP) solution named Google Authenticator. A TOTP is a single-use code with a finite lifetime that can be calculated by two parties (client and server) using a shared secret and a synchronized clock (see RFC 4226 for additional information). In the case of Google Authenticator, the TOTPs are generated using a software (soft) token on a mobile device. Google currently offers applications for the Apple iPhone, Android-based devices, and Blackberry handsets. A user authenticating with a Google Authenticator-enabled service will require possession of this software token. In order for the token to be effective, it must not be able to be duplicated, and the shared secret should be closely guarded.

Google Authenticator's soft token solution offers a number of advantages over other commercially available solutions. It is free to use (all applications are free to download), the TOTP algorithm is open source, well-known, and well-tested, and finally it does not require a dedicated server for processing tokens. While certain potential weaknesses in SHA-1 have been identified, none of them can be exploited within the 30-second timeframe of the TOTP's usability. For all intents and purposes, SHA-1 is reasonably secure, well-tested, and purpose-appropriate for this application. The algorithm, however, is only as secure as the users and administrators are at protecting the shared secret used in token processing.

Calculating The Google Authenticator TOTP

The Google Authenticator TOTP is calculated by generating an HMAC-SHA1 token, which uses a 10-byte, base32-encoded shared secret as a key and Unix time (epoch) divided by 30 (the code's 30-second interval) as inputs. The resulting 20-byte token is converted to a 40-character hexadecimal string, and the least significant (last) hex digit is then used to calculate a 0-15 offset. The offset is then used to read the next 8 hex digits starting from that position. The resulting 8 hex digits are then AND'd with 0x7FFFFFFF (2,147,483,647), then the modulo of the resultant integer and 1,000,000 is calculated, which produces the correct code for that 30-second period.

Base32 encoding and decoding were covered in my previous Tech Tip titled Base32 Encoding And Decoding With iRules. The Tech Tip details the process for decoding a user's base32-encoded key to binary as well as converting a binary key to base32. The HMAC-SHA256 token calculation iRule was originally submitted by Nat to the Codeshare on DevCentral. The iRule was slightly modified to support the SHA-1 algorithm, but is otherwise taken directly from the pseudocode outlined in RFC 2104. These two pieces of code contribute the bulk of the processing of the Google Authenticator code. The rest is done with simple bitwise and arithmetic functions.
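To make the truncation step concrete, here is a small Tcl sketch of that final arithmetic. It assumes a variable hmac_hex already holds the 40-character hex HMAC-SHA1 digest produced by the HMAC iRule described above:

# grab the 0-15 offset from the last hex digit of the digest
scan [string index $hmac_hex 39] %x offset
# read 8 hex digits (4 bytes) starting at that byte offset
set code_hex [string range $hmac_hex [expr {$offset * 2}] [expr {$offset * 2 + 7}]]
scan $code_hex %x code_int
# mask the sign bit and keep the low 6 decimal digits
set totp [format %06d [expr {($code_int & 0x7FFFFFFF) % 1000000}]]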
Google Authenticator Two-Factor Authentication Process

Installing Google Authenticator Two-Factor Authentication

The installation of Google Authenticator two-factor authentication on your BIG-IP is divided into six sections: creating an LDAP authentication configuration, configuring an LDAP (Active Directory) authentication profile, testing your authentication profile, adding the Google Authenticator iRule and "user_to_google_auth" mapping data group, attaching the iRule to the authentication profile, and finally generating soft tokens for your users. The process is broken out into steps, as trying to complete all the sections in tandem can be difficult to troubleshoot.

Creating An LDAP (Active Directory) Authentication Configuration

The LDAP profile we will configure will be extremely basic: no SSL, no Active Directory-specific settings, etc. A detailed walkthrough for more advanced deployments can be found in our best practices guide: Configuring LDAP remote authentication for Active Directory.

1. Login to your BIG-IP using administrator credentials
2. Navigate to Local Traffic > Profiles > Authentication > Configurations
3. Click "Create" in the upper right-hand corner
4. Select "LDAP" from the "Type" drop-down menu
5. Now fill in the fields with your environment-specific values:
   Name: ldap.f5test.local
   Type: LDAP
   Remote LDAP Tree: dc=f5test, dc=local
   Host(s): <IP address(es) of LDAP server(s)>
   Service Port: 389 (default)
   LDAP Version: 3 (default)
   Bind DN: cn=ldap_bind_acct, dc=f5test, dc=local (if your LDAP server allows anonymous binds you may not need this option)
   Bind Password: <admin password>
   Confirm Bind Password: <admin password>
6. Click "Finished" to save the configuration

Configuring An LDAP (Active Directory) Authentication Profile

1. Navigate to Local Traffic > Profiles > Authentication > Profiles
2. Click "Create" in the upper right-hand corner
3. Select "LDAP" from the "Type" drop-down menu
4. Fill in the fields with appropriate values:
   Name: ldap.f5test.local
   Type: LDAP
   Configuration: ldap.f5test.local (select the previously created configuration from the drop-down)
   Rule: (leave this unchecked and not enabled for now; this is where we will enable the Google Authenticator iRule shortly)
5. Click "Finished"

Test Your Authentication Profile

1. Create a basic HTTP virtual server with your LDAP authentication profile enabled on the virtual
2. Access your virtual from a web browser and you should be prompted with an HTTP Basic Authentication credential form
3. Test with known-working credentials. If everything works you're good to go; if not, you'll need to troubleshoot the authentication issue
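If the test fails, it can save time to confirm the bind account and search base outside of the BIG-IP entirely. Here is a sketch using the standard ldapsearch utility; the DNs match the example values used in this walkthrough, and the filter attribute is just an assumption about your directory schema:

ldapsearch -x -H ldap://<LDAP server IP> \
   -D "cn=ldap_bind_acct,dc=f5test,dc=local" -W \
   -b "dc=f5test,dc=local" "(sAMAccountName=user1)"

If the bind succeeds and the query returns the expected user entry, the credentials and tree you gave the authentication configuration are good, and the problem lies elsewhere.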
Adding the Google Authenticator iRule

1. Go to the DevCentral Codeshare and download the Google Authenticator iRule
2. Navigate to Local Traffic > iRules > iRule List
3. Click "Create" in the upper right-hand corner
4. Name your iRule "google_authenticator_plus_ldap_two_factor" and paste the iRule into the "Definition" section
5. Click "Finished" when you're done

Attaching The Google Authenticator iRule To Your Authentication Profile

1. Go back to the "Authentication Profile" section by browsing to Local Traffic > Profiles > Authentication > Profiles
2. Select your LDAP profile from the list
3. Now select the "google_authenticator_plus_ldap_two_factor" iRule from the "Rule" drop-down
4. Click "Finished"

Generating Software Tokens For Users

In addition to the Google Authenticator iRule, we also wrote a Google Authenticator Soft Token Generator iRule that will generate soft tokens for your users. The iRule can be added directly to an HTTP virtual server without a pool and accessed directly to create tokens. There are a few available fields in the generator: account, pre-defined secret, and a QR code option. The "account" field defines how to label the soft token within the user's mobile device and can be useful if the user has multiple soft tokens on the same device (I have 3 and need to label them to keep them straight). A 10-byte string can be used as a pre-defined secret for conversion to a base32-encoded key. We advise against using a pre-defined key, because a key known to the user is something they know (as opposed to something they have) and could potentially be regenerated out-of-band, thereby nullifying the benefits of two-factor authentication. Lastly, there is an option to generate a QR code by sending an HTTPS request to Google and returning the QR code as an image. While this is convenient, it could be seen as insecure since it may wind up in Google's logs somewhere. You'll have to decide if that is a risk you're willing to take for the convenience it provides.

Once the token has been generated, it will need to be added to a data group on the BIG-IP:

1. Navigate to Local Traffic > iRules > Data Group Lists
2. Select "Create" from the upper right-hand corner if the data group does not yet exist. If it exists, just select it from the list.
3. Name the data group "user_to_google_auth" (the data group name can be changed in the RULE_INIT section of the Google Authenticator iRule)
4. The type of data group will be "string"
5. Type the "username" into the "string" field and paste the "Google Authenticator key" into the "value" field
6. Click "Add" and the username/key pair should appear in the list as such: user := ONSWG4TFOQYTEMZU
7. Click "Finished" when all your username/key pairs have been added.

Your user can scan the QR code or type the key into their device manually. After they scan the QR code, the account name should appear along with the TOTP for the account. The image below shows how the soft token appears in the Google Authenticator iPhone application:

Once again, do not let the user leave with a copy of the plain text key. Knowing their key value will negate the value of having the token in the first place. Once the key has been added to the BIG-IP and the user's device, and they've tested their access, destroy any reference to the key outside the BIG-IP's data group. If you're worried about having the keys in plain text on the BIG-IP, they can be encrypted with AES or stored off-box in LDAP and only queried via a secure connection. This is beyond the scope of this article, but doable with iRules.
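If you want to sanity-check a generated key, or produce a 10-byte secret somewhere other than the generator iRule, the same values are easy to create at a workstation. This is just an illustration of the key format, not part of the BIG-IP configuration:

# generate 10 random bytes and print them as a base32 key
SECRET_HEX=$(openssl rand -hex 10)
python -c "import base64, binascii; print(base64.b32encode(binascii.unhexlify('$SECRET_HEX')))"

The 16-character base32 string this prints has the same shape as the example key ONSWG4TFOQYTEMZU shown in the data group step above.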
Testing and Troubleshooting

There are a lot of moving pieces in this iRule, so troubleshooting can be a bit daunting at first glance, but because all of the pieces can be separated into their constituents, the problem is usually identified quickly. There are five pieces that make up this solution: the LDAP service, the BIG-IP LDAP profile, the Google Authenticator iRule, the "user_to_google_auth" mapping data group, and finally the soft token. Try to separate them from each other to expedite the troubleshooting process. Here are a few helpful hints for troubleshooting potential issues:

1. Are all the clocks synchronized? The BIG-IP and LDAP server can be tested from the command line by running 'ntpdate -q pool.ntp.org'. If the clocks are more than a few milliseconds off, they'll need to be adjusted. An NTP server should be configured for all devices. Likewise, the user's mobile device must be configured to use network time or else the calculated value will always be wrong. Remember that timezones do not matter when using Unix time.

2. Is basic LDAP working without the iRule attached? Before ever touching any of the Google Authenticator-related iRules, data groups, devices, etc., your LDAP configuration should be in working order. If you're having problems finding the issue, enable "debug logging" at the bottom of the LDAP authentication configuration page on your BIG-IP and tail the logs on your LDAP server. Revisit the best practices guide if you are still unsure about any configuration options.

3. Turn on (or increase) logging for the Google Authenticator iRule. In the RULE_INIT section of the Google Authenticator iRule, there is a debug logging option. Set it to '2' and all actions from the iRule will be logged to /var/log/ltm. If you see one particular area that is consistently hanging, investigate it further.

Conclusion

With every passing day system security becomes a greater concern. Today's attacks are far more sophisticated and costly than those of days past. With all the stories of stolen laptops and other devices in the field, it is a little easier to sleep as a systems administrator knowing that a tech-aware thief has one more hurdle to surpass in an effort to compromise your infrastructure. The implementation costs of deploying two-factor authentication with Google Authenticator in an existing F5 infrastructure are very low, assuming your employees have company-issued mobile devices. The cost can be reduced to the man-hours required to install this iRule and generate tokens for your users. The cost is almost certainly less than that of a single incident of a compromised account. Until next time, batten down the hatches and get that two-factor project underway that's been on the backburner for two years.

Code and References

Google Authenticator iRule - Documentation and code for the iRule used in this Tech Tip
Google Authenticator Soft Token Generator iRule - iRule for generating soft tokens for users
RFC 4226 - HOTP: An HMAC-Based One-Time Password Algorithm
RFC 2104 - HMAC: Keyed-Hashing for Message Authentication
RFC 4648 - The Base16, Base32, and Base64 Data Encodings
SOL11072 - Configuring LDAP remote authentication for Active Directory
Monitoring TCP Applications #01

LTM has built-in application health monitor templates for many TCP-based application protocols (FTP, HTTP, HTTPS, IMAP, LDAP, MSSQL, NNTP, POP3, RADIUS, RTSP, RPC, SASP, SIP, SMB, SMTP, SOAP). If you need to monitor an application which depends on an upper-layer protocol for which there is no built-in monitor template, LTM provides a number of options to build a monitor based on the underlying transport layer protocol -- TCP. I'll cover each of those options in a separate article, starting here with the built-in "tcp" and "tcp_half_open" monitor types.

Overview: tcp and tcp_half_open

Both monitor types attempt to verify the availability of a service by making a TCP connection on the appropriate port. There are only a couple of differences between the tcp and the tcp_half_open monitors:

monitor type    ECV/EAV   reverse/transparent   connection handling     transact with service?
tcp             ECV       yes (optional)        full open, full close   yes (optional)
tcp_half_open   EAV       no                    half open, RST close    no

Both have the same standard monitor configuration options of interval, timeout, and alias address/port (for more on those options, and on the reverse & transparent options, see the LTM manual section on Configuring Monitors). As you will see below, some of the differences are significant and may dictate which monitor is most appropriate for your application.

Monitor Type "tcp"

The tcp monitor is useful for a couple of different scenarios: monitoring services that you can't transact with, but whose socket availability you want to verify while closing the connection properly (routers, firewalls); or monitoring services with which you can transact a quick request/response in cleartext after the TCP handshake to verify service availability (telnet is a basic example, but the same concept applies to any other text-based protocol).

How it works

In summary, a monitor of type tcp attempts to send and/or receive specific content over a TCP connection. The check is successful when the server response contains the Receive String value. A tcp monitor may optionally be configured with a Send String value and a Receive String value. If the Send String value is blank and a connection is successfully established, the service is considered up. A blank Receive String value matches any response.

The default tcp monitor, with no Send string or Receive string configured, tests a service by establishing a TCP connection with the pool member on the configured service port and then immediately closing the connection without sending any data. This causes some services, such as telnet and ssh, to log a connection error, filling up the server logs with unnecessary errors. To eliminate the extraneous logging, you can configure the tcp monitor to send enough data to the service to make it happy, or just use the tcp_half_open monitor. Depending on your monitoring requirements, you may also be able to monitor a service that expects empty connections, such as tcp_echo (by using the default tcp_echo monitor) or daytime (by specifying the appropriate alias service port when customizing the tcp monitor template).
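As an illustration of the Send/Receive options, a customized tcp monitor might render in the configuration something like the following. The send and receive strings here assume a hypothetical text protocol that answers a STATUS command with a line containing OK; substitute whatever request/response pair your service understands:

monitor my_tcp_app {
   defaults from tcp
   interval 5
   timeout 16
   send "STATUS\r\n"
   recv "OK"
}

Because the send string is non-empty, the monitor completes the handshake, sends the command, and marks the member up only if the response contains "OK".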
Here are the details of a tcp monitor in action, including the option for sending data and evaluating the response:

1. The tcp monitor will perform a normal 3-way TCP handshake.
2. If no Send string is configured, the pool member will be marked UP upon successful completion of the 3-way handshake. If a Send string is configured, it will be sent to the server.
3. If the server fails to respond before the timeout, the pool member is marked DOWN. If the server does respond before the timeout, the server response is compared with the Receive string: if no Receive string is configured, the pool member is marked UP; if a Receive string is configured, and the response contains the Receive string, the pool member will be marked UP. If the response does not contain the Receive string, the pool member will be marked DOWN.
4. If the server resets the connection during the handshake or before an expected response is received, the pool member will be marked DOWN and the connection is torn down immediately. In all other cases, the connection will be closed with a normal 4-way close.

The same logic as a decision tree:

handshake successful?
  no  -> DOWN
  yes -> Send string configured/sent?
    no  -> UP (close)
    yes -> server response before timeout?
      no  -> DOWN
      yes -> Receive string configured?
        no  -> UP (close)
        yes -> Receive string matches response?
          no  -> DOWN
          yes -> UP (close)

Monitor Type "tcp_half_open"

The tcp_half_open monitor is most widely used for gateway monitoring, when you just need to ensure the socket is responding to connection requests and desire the lowest overhead on the monitoring target. For example, a busy router would be less impacted by a half-open connection request that is immediately reset than by a connection that completes the entire open and close handshake sequence. (Although this approach minimizes the impact of monitoring on the monitoring target, it's important to know that the tcp_half_open monitor uses more of LTM's memory than the tcp monitor does, since the tcp_half_open monitor is an EAV that runs a small script outside of TMM, while the tcp monitor is an ECV internal to TMM.)

Another common use for the tcp_half_open monitor is to prevent the application from spewing a bunch of log messages indicating connections were opened but not used. For example, one consultant recently told me he uses the tcp_half_open monitor to verify sshd is alive and answering without filling up /var/log/secure. Telnet has similar issues with connections on which no data is sent. It should be noted that some applications cannot gracefully handle the half-open connection and subsequent reset, so some testing may be in order before implementing this monitor.

How it works

The tcp_half_open monitor sends a SYN packet to the pool member. If a SYN/ACK is received from the server in response, the pool member is marked UP and a RST is sent to tear the connection down; if no SYN/ACK is received, the pool member is marked DOWN**.

**Not fully functional in some versions: SOL7362: The BIG-IP tcp_half_open monitor does not mark the service as DOWN after receiving a RST packet from the pool member

More info

LTM manual: Configuring Monitors
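Since the half-open check involves no send or receive strings, a gateway health check built on it needs nothing beyond the timing values. A minimal sketch, following the same configuration style as above:

monitor gateway_half_open {
   defaults from tcp_half_open
   interval 5
   timeout 16
}

Applied to a pool of routers, this sends only a SYN per check and resets on the SYN/ACK, keeping the monitoring load on the gateways as light as possible.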
One Time Passwords via an SMS Gateway with BIG-IP Access Policy Manager

One time passwords, or OTPs, are used (as the name indicates) for a single session or transaction. The plus side is a more secure deployment; the downside is two-fold -- first, most solutions involve a token system, which is costly in management, dollars, and complexity, and second, people are lousy at remembering things, so a delivery system for that OTP is necessary. The exercise in this tech tip is to employ BIG-IP APM to generate the OTP and pass it to the user via an SMS gateway, eliminating the need for a token-creating server/security appliance while reducing cost and complexity.

Getting Started

This guide was developed by F5er Per Boe utilizing the newly released BIG-IP version 10.2.1. The "-secure" option for the mcget command is new in this version and is required in one of the steps for this solution. Also, this solution uses the Clickatell SMS Gateway to deliver the OTPs. Their API is documented at http://www.clickatell.com/downloads/http/Clickatell_HTTP.pdf. Other gateway providers with a web-based API could easily be substituted. Also, there are steps at the tail end of this guide to utilize the BIG-IP's built-in mail capabilities to email the OTP during testing in lieu of SMS.

The process in delivering the OTP is shown in Figure 1. First a request is made to the BIG-IP APM. The policy is configured to authenticate the user's phone number in Active Directory, and if successful, generate an OTP and pass it along to the SMS gateway via the HTTP API. The user will then use the OTP to enter into the form updated by APM before being allowed through to the server resources.

BIG-IP APM Configuration

Before configuring the policy, an access profile needs to be created, as do a couple of authentication servers. First, let's look at the authentication servers.

Authentication Servers

To create servers used by BIG-IP APM, navigate to Access Policy->AAA Servers and then click create. The first profile is simple: supply your domain server, domain name, and admin username and password as shown in Figure 2.

The other authentication server is for the SMS gateway, and since it is an HTTP API we're using, we need the HTTP-type server as shown in Figure 3. Note that the hidden form values highlighted in red will come from your Clickatell account information. Also note that the form method is GET, the form action references the Clickatell API interface, and that the match type is set to look for a specific string. The Clickatell SMS Gateway expects the following format:

https://api.clickatell.com/http/sendmsg?api_id=xxxx&user=xxxx&password=xxxx&to=xxxx&text=xxxx

Finally, the successful logon detection value highlighted in red at the bottom of Figure 3 should be modified to the response code returned from the SMS gateway.
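Before wiring the gateway into the policy, it can be worth testing your Clickatell credentials with a hand-built request matching the documented format. This is just the URL format from the API guide above with placeholder values; substitute your own api_id, user, password, and destination number:

curl "https://api.clickatell.com/http/sendmsg?api_id=XXXX&user=XXXX&password=XXXX&to=15551234567&text=test+OTP"

If the account is set up correctly, the body of the response should contain the same success indication you configure as the "successful logon detection" value in the HTTP AAA server.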
Now that the authentication servers are configured, let's take a look at the access profile and create the policy.

Access Profile & Policy

Before we can create the policy, we need an access profile, shown below in Figure 4 with all default settings. Once that is done, we click on Edit under the Access Policy column, highlighted in red in Figure 5. The default policy is bare bones, or as some call it, empty. We'll work our way through the objects, taking screen captures as we go and making notes as necessary. To add an object, just click the "+" sign after the Start flag.

The first object we'll add is a Logon Page, as shown in Figure 6. No modifications are necessary here, so you can just click save. Next, we'll configure the Active Directory authentication, so we'll add an AD Auth object. The only setting here in Figure 7 is selecting the server we created earlier.

Following the AD Auth object, we need to add an AD Query object on the AD Auth successful branch, as shown in Figures 8 and 9. The server is selected in the properties tab, and then we create an expression in the branch rules tab. To create the expression, click change, and then select the Advanced tab. The expression used in this AD Query branch rule:

expr { [mcget {session.ad.last.attr.mobile}] != "" }

Next we add an iRule Event object to the AD Query OK branch that will generate the one-time password and provide logging. Figure 10 shows the iRule Event object configuration. The iRule referenced by this event is below. The logging is there for troubleshooting purposes and should probably be disabled in production.

when ACCESS_POLICY_AGENT_EVENT {
  expr srand([clock clicks])
  set otp [string range [format "%08d" [expr int(rand() * 1e9)]] 1 6 ]
  set mail [ACCESS::session data get "session.ad.last.attr.mail"]
  set mobile [ACCESS::session data get "session.ad.last.attr.mobile"]
  set logstring mail,$mail,otp,$otp,mobile,$mobile
  ACCESS::session data set session.user.otp.pw $otp
  ACCESS::session data set session.user.otp.mobile $mobile
  ACCESS::session data set session.user.otp.username [ACCESS::session data get "session.logon.last.username"]
  log local0.alert "Event [ACCESS::policy agent_id] Log $logstring"
}

when ACCESS_POLICY_COMPLETED {
  log local0.alert "Result: [ACCESS::policy result]"
}

On the fallback path of the iRule Event object, add a Variable Assign object as shown in Figure 10b. Note that the first assignment should be set to secure, as indicated in the image with the [S]. The expressions in Figure 10b are:

session.logon.last.password = expr { [mcget {session.user.otp.pw}]}
session.logon.last.username = expr { [mcget {session.user.otp.mobile}]}

On the fallback path of the AD Query object, add a Message Box object as shown in Figure 11 to alert the user if no mobile number is configured in Active Directory. On the fallback path of the Event OTP object, we need to add the HTTP Auth object. This is where the SMS gateway we configured in the authentication server is referenced. It is shown in Figure 12. On the fallback path of the HTTP Auth object, we need to add a Message Box as shown in Figure 13 to communicate the error to the client. On the Successful branch of the HTTP Auth object, we need to add a Variable Assign object to store the username. A simple expression and a unique name for this variable object are all that is changed. This is shown in Figure 14.

On the fallback branch of the Username Variable Assign object, we'll configure the OTP Logon page, which requires a Logon Page object (shown in Figure 15). I haven't mentioned it yet, but the name field of all these objects isn't a required change; adding information specific to the object simply helps with readability. On this form, only one entry field is required, the one-time password, so the second password field (enabled by default) is set to none and the initial username field is changed to password. The Input field below is changed to reflect the type of logon to better cue the user. Finally, we'll finish off with an Empty Action object where we'll insert an expression to verify the OTP. The name is configured in properties and the expression in the branch rules, as shown in Figures 16 and 17. Again, you'll want to click Advanced on the branch rules to enter the expression.
The expression used in the branch rules above is:

expr { [mcget {session.user.otp.pw}] == [mcget -secure {session.logon.last.otp}] }

Note again that the -secure option is only available in version 10.2.1 forward. Now that we're done adding objects to the policy, one final step is to click on the Deny following the OK branch of the OTP Verify Empty Action object and change it from Deny to Allow. Figure 18 shows how it should look in the visual policy editor window. Now that the policy is completed, we can attach the access profile to the virtual server and test it out, as can be seen in Figures 19 and 20 below.

Email Option

If during testing you'd rather send emails than utilize the SMS gateway, then configure your BIG-IP for mail support (Solution 3664), keep the Logging object, lose the HTTP Auth object, and configure the system with this script to listen for the messages sent to /var/log/ltm by the configured Logging object:

#!/bin/bash
while true
do
  tail -n0 -f /var/log/ltm | while read line
  do
    var2=`echo $line | grep otp | awk -F'[,]' '{ print $2 }'`
    var3=`echo $line | grep otp | awk -F'[,]' '{ print $3 }'`
    var4=`echo $line | grep otp | awk -F'[,]' '{ print $4 }'`
    if [ "$var3" = "otp" -a -n "$var4" ]; then
      echo Sending pin $var4 to $var2
      echo One Time Password is $var4 | mail -s $var4 $var2
    fi
  done
done

The log messages look like this:

Jan 26 13:37:24 local/bigip1 notice apd[4118]: 01490113:5: b94f603a: session.user.otp.log is mail,user1@home.local,otp,609819,mobile,12345678

The output from the script as configured looks like this:

[root@bigip1:Active] config # ./otp_mail.sh
Sending pin 239272 to user1@home.local

Conclusion

The BIG-IP APM is an incredibly powerful tool to add to the LTM toolbox. Whether using the mail system or an SMS gateway, you can take a bite out of your infrastructure complexity by using this solution to eliminate the need for a token management service. Many thanks again to F5er Per Boe for this excellent solution!
HTTPS SNI Monitoring How-to

Hi,

You may or may not already have encountered a webserver that requires the SNI (Server Name Indication) extension in order to know which website it needs to serve you. It comes down to "if you don't tell me what you want, I'll give you a default website or even simply reset the connection". A typical IIS 8.5 will do this, even with the 'Require SNI' checkbox unchecked.

So you have your F5, with its HTTPS monitors. Those monitors do not yet support SNI, as they have no means of specifying the hostname you want to use for SNI. In comes a little script that will do exactly that. Here's a few quick steps to get you started:

1. Download the script from this article (it's posted on pastebin: http://pastebin.com/hQWnkbMg).
2. Import it under 'System' > 'File Management' > 'External Monitor Program File List'.
3. Create a monitor of type 'External' and select the script from the picklist under 'External Program'.
4. Add your specific variables (explanation below).
5. Add the monitor to a pool and you are good to go.

A quick explanation of the variables:

METHOD (GET, POST, HEAD, OPTIONS, etc. - defaults to 'GET')
URI ("the part after the hostname" - defaults to '/')
HTTPSTATUS (the status code you want to receive from the server - defaults to '200')
HOSTNAME (the hostname to be used for SNI and the Host header - defaults to the IP of the node being targeted)
TARGETIP and TARGETPORT (same functionality as the 'alias' fields in the original monitors - defaults to the IP of the node being targeted and port 443)
DEBUG (set to 0 for nothing, set to 1 for logs in /var/log/ltm - defaults to '0')
RECEIVESTRING (the string that needs to be present in the server response - default is empty, so not checked)
HEADERX (replace the X by a number between 1 and 50; the value for this is a valid HTTP header line, i.e. "User-Agent: Mozilla" - no defaults)
EXITSTATUS (set to 0 to make the monitor always mark the pool members as up; it's fairly useless, but hey... - defaults to 1)

There is a small thing you need to know though: due to the nature of the openssl binary (more specifically the s_client), we are presented with a "stdin redirection problem". The bottom line is that your F5 cannot be "slow", and by slow I mean that if it requires more than 3 seconds to pipe a string into openssl s_client, the script will always fail. This limit is defined in the variable "monitor_stdin_sleeptime" and defaults to '3'. You can set it to something else by adding a variable named 'STDIN_SLEEPTIME' and giving it a value. From my experience, anything above 3 stalls the "F5 script executer", anything below 2 is too fast for openssl to read the request from stdin, effectively sending nothing and thus yielding 'down'. When you enable debugging (DEBUG=1), you can see what I mean for yourself: no more log entries for the script when STDIN_SLEEPTIME is set too high; always down when you set it too low.
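To see the underlying mechanism the script relies on, you can reproduce an SNI-aware probe by hand from a shell. The hostname and address below are placeholders; -servername is what actually puts the hostname into the TLS handshake:

echo -e "HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n" | \
  openssl s_client -connect 10.0.0.10:443 -servername www.example.com -quiet

A server that requires SNI will return the correct site's response here, while the same command without -servername gets the default site or a reset, which is exactly the case the external monitor is built to handle.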
I hope this script is useful for you,

Kind regards,
Thomas Schockaert

LTM: Configuring IP Forwarding

A basic change in internal routing architecture and functionality between BIG-IP 4.x and LTM 9.x has caused some confusion for customers whose v4.x deployment depended on IP forwarding. Here is an explanation of the change, and the new configuration requirements to support forwarding of IP traffic using LTM.

What changed?

Both BIG-IP and LTM are default-deny devices, which means a specific configuration is required to support every desired traffic flow. In BIG-IP, packets not matching a virtual server or SNAT/NAT would be dropped, unless the BIG-IP v4.x global IP forwarding checkbox feature was enabled. With IP forwarding enabled, packets not matching a virtual or SNAT/NAT would be forwarded intact per the routing table entries. LTM also requires that all traffic must match a defined TMM listener (a virtual server, SNAT or NAT) or be dropped. However, LTM's full application proxy architecture separates routing intelligence from load balancing, and the deprecated IP forwarding feature was intentionally not included in LTM to optimize load balancing performance.

The IP forwarding checkbox feature was deprecated early in the BIG-IP 4.x tree. Although F5 has long recommended that IP forwarding be replaced with forwarding virtual servers, forwarding pools, SNATs or NATs, some customers retained their IP forwarding configuration when upgrading to LTM v9.x. Since those various configuration options exist to support traffic previously managed by IP forwarding, the One-Time Conversion Utility (OTCU) that translates v4 configurations to v9 syntax does not presume to configure global forwarding virtual servers in place of global IP forwarding. For those customers and other administrators already familiar with BIG-IP but now using LTM, it isn't obvious how to replicate the forwarding behaviour they require.

Configuring forwarding for LTM

The recommended replacement for global IP forwarding is a forwarding virtual server configured to listen for all IP protocols, all addresses and all ports on all VLANs. This virtual server will catch all traffic not matching another listener and forward it in accordance with LTM's routing table entries. To configure a wildcard forwarding virtual server that listens for all IP protocols, all addresses and all ports on all VLANs:

1. In the LTM GUI, browse to Virtual Servers & click "Create".
2. Configure the following properties:
   Destination: Network Address=0.0.0.0 Mask=0.0.0.0
   Service port: 0
   Type: Forwarding (IP)
   Protocol: *All Protocols
   VLAN Traffic: All VLANs
3. Click "Finish" to create the virtual server.

The resulting configuration snip looks like this:

virtual forward_vs {
   ip forward
   destination any:any
   mask none
}

This will forward all IP traffic as long as there is a matching route in the routing table. (Packets bound for destinations for which there is no route will be dropped with no ICMP notification.)
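The same construct can be narrowed so that only part of the address space is forwarded. As a sketch extrapolated from the snip above, a forwarding virtual server limited to destinations in 10.0.0.0/8 (an arbitrary example network) would render roughly like this:

virtual forward_net10 {
   ip forward
   destination 10.0.0.0:any
   mask 255.0.0.0
}

Traffic to any other destination would then fall through to other listeners or be dropped, which is often preferable to a truly global wildcard.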
Commonly required modifications

You can limit forwarding to only traffic bound for specific subnets by specifying the appropriate subnet and mask, as in the example above. If a different router exists on any directly connected network, you may need to create a custom fastL4 profile with "Loose Initiation" & "Loose Close" enabled to prevent LTM from interfering with forwarded conversations traversing an asymmetrical path. If the forwarding virtual server is intended to allow outbound access for your privately addressed servers, you will need to configure a SNAT to translate the source address of that traffic to a publicly routable address. If you have multiple gateways, you can load balance requests between the routers. To do so, first create a gateway pool containing the routers as members. Then configure the virtual server as above, but selecting Type "Performance (Layer 4)" instead of "Forwarding (IP)", and applying the gateway pool as its resource (see the sketch below).
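A rough rendering of that gateway load balancing arrangement, with made-up router addresses, might look something like the following; exact configuration syntax varies by version, and the GUI steps above are the authoritative procedure:

pool gateway_pool {
   member 10.1.1.1:any
   member 10.1.1.2:any
}
virtual forward_lb_vs {
   destination any:any
   mask none
   profiles fastL4
   pool gateway_pool
}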
Related information

SOL7229: Methods of gaining administrative access to nodes through the BIG-IP system - If you only need to forward administrative traffic to your servers, and no other forwarding is required, there are several additional options detailed in this solution.

SOL473: Advantages and disadvantages of using IP forwarding - This is an old solution that summarizes the pros and cons of BIG-IP 4.x IP forwarding. I only suggest reading it now to highlight the fact that LTM's approach retains the advantages and overcomes the disadvantages mentioned therein.

LTM: HTTP Monitoring with POST Request

I recently discovered that you can use LTM's built-in HTTP monitor template to send a POST request to a server to verify its health. It's surprisingly straightforward in comparison with the external monitor approach I've used in the past. This article will cover the details you'll need to properly implement an HTTP POST monitor using the built-in monitor template.

The built-in HTTP monitor template

The HTTP monitor template provides a way to send a single request to the server, monitor the response for an expected string, and mark the pool member down if the expected response isn't seen within the configured timeout. The options of interest for this exercise are the Send string and the Receive string. The Send string is where the request is configured. It should include the method (in this case, POST), the URI to request from the server, the HTTP version, some standard HTTP headers (including Basic Auth, if applicable), and in this case, the POST body. The server should respond in a predictable way to this request each time it is issued. The Receive string should include a phrase or non-dictionary word that will appear only in that expected response. The Receive string can use basic regular expressions to accommodate per-server or other minor variations in the expected response.

Capturing the transaction

The first thing you'll need to do, as with any monitor, is to identify and capture the transaction you'd like to replicate in order to validate the health of your application servers. You can run tcpdump to watch live production traffic and capture the target request and response, but it's safer to run an isolated test request at the command line to more exactly replicate the action of the LTM monitor. The approach I recommend uses both techniques: first capture a trace of production traffic so you have a known-good working example with all the expected headers, then log into the LTM command line and use telnet or cURL to send the same request to the server along with the observed headers, monitoring the response and adjusting the request until the expected response is received.

Here's the cURL syntax you'll need to use for command line request submission:

curl -N -H <headers> -u <user:password> -d <post body> http://<pool_member_IP>:<pool_member_port><URI>

You'll need to examine the actual data that's being passed, including whitespace characters. A packet trace can be used to look directly at the conversation on the wire. Capture on the server-facing interface. To examine only production traffic, filter for the IP/port of the pool member, excluding the non-floating self IP on the server-facing VLAN:

tcpdump -nni <server_vlan> -Xs 0 host <pool_member_IP> and port <pool_member_port> and not host <internal_non-floating_selfIP>

(Filtering out the non-floating self IP prevents monitor traffic from appearing in the trace.) To examine your command line request, filter instead for the non-floating self IP on the server-facing VLAN and the IP/port of the pool member:

tcpdump -nni <server_vlan> -Xs 0 host <internal_non-floating_selfIP> and host <pool_member_IP> and port <pool_member_port>

Here's a sample transaction trace for a SOAP/xml request:

Once you have identified the transaction and validated the required headers, you are ready to build your custom monitor.
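As an illustration of that cURL step, a test against the SOAP service used later in this article might look like the following. The address, port, and credentials come from the article's example environment (the credentials are the decoded form of its Basic Auth header), and the SOAP body is truncated here for readability:

curl -N -u sysuser:sysuser00 \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d '<SOAP-ENV:Envelope xmlns:SOAP-ENV="urn:schemas-xmlsoap-org:envelope">...</SOAP-ENV:Envelope>' \
  http://191.168.1.1:4200/XAIApp/xaiserver

If the response comes back containing the phrase you intend to use as the Receive string, you've captured everything the monitor will need.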
Creating the custom HTTP monitor

The information you've gathered so far should be everything you need to configure the custom HTTP monitor. Start by creating a monitor of type "http" in the LTM GUI (Local Traffic -> Monitors -> Create). Configure Interval and Timeout as appropriate for your application. Set Username and Password if Basic Auth credentials are required by the server. Then you'll need to dig into your transaction trace data to construct the Send string containing all the request details.

For the Send string, the HTTP method must be specified along with the URI and HTTP version (1.1) on the same line, followed by an encoded CR/LF (\r\n), then the Host header (required for HTTP 1.1). For the trace above, we'll start with:

POST /XAIApp/xaiserver HTTP/1.1\r\nHost: 191.168.1.1:4200

Add the other headers, all separated by a single encoded CR/LF. (Line breaks below are only for display; no literal carriage returns are inserted into the Send string.) Include the Basic Auth header, if applicable, even though it is specified elsewhere in the monitor template. With headers, the Send string now looks like this:

POST /XAIApp/xaiserver HTTP/1.1\r\nHost: 191.168.1.1:4200\r\n
Authorization: Basic c3lzdXNlcjpzeXN1c2VyMDA= \r\n
Pragma: no-cache\r\nAccept: */*\r\nContent-Length: 673\r\nContent-Type: application/x-www-form-urlencoded

Finally, add the POST body, preceded by a double CR/LF to indicate the end of the headers. If the POST body contains double quotes or braces, you will need to escape them with a preceding backslash so they are interpreted correctly by LTM. The completed Send string now looks like this:

POST /XAIApp/xaiserver HTTP/1.1\r\nHost: 191.168.1.1:4200\r\n
Authorization: Basic c3lzdXNlcjpzeXN1c2VyMDA= \r\n
Pragma: no-cache\r\nAccept: */*\r\nContent-Length: 673\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\n
<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"urn:schemas-xmlsoap-org:envelope\">
<SOAP-ENV:Body><SSvcInstallationOptionsMaintenance dateTimeTagFormat=\"CdxDateTime\" transactionType =\"READ\" ><SSvcInstallationOptionsMaintenanceService>
<SSvcInstallationOptionsMaintenanceHeader InstallationOptionID=\"1\"/><SSvcInstallationOptionsMaintenanceDetails InstallationOptionID=\"1\">
<InstallMsg><InstallMsgHeader InstallationOptionID=\"1\"/></InstallMsg><InstallAlg><InstallAlgHeader InstallationOptionID=\"1\"/></InstallAlg>
<Module></Module></SSvcInstallationOptionsMaintenanceDetails></SSvcInstallationOptionsMaintenanceService></SSvcInstallationOptionsMaintenance>
</SOAP-ENV:Body></SOAP-ENV:Envelope>

To complete the monitor definition, enter the phrase or string expected in a healthy server response into the Receive string field.
For the trace above, we'll use:

Standard automatic payment creation

The resulting monitor configuration will look like this:

monitor http-post-soapenv {
   defaults from http
   interval 10
   timeout 31
   password "password"
   recv "Standard automatic payment creation"
   send "POST /XAIApp/xaiserver HTTP/1.1\r\nHost: 191.168.1.1:4200\r\nAuthorization: Basic c3lzdXNlcjpzeXN1c2VyMDA= \r\nPragma: no-cache\r\nAccept: */*\r\nContent-Length: 673\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\n<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"urn:schemas-xmlsoap-org:envelope\"><SOAP-ENV:Body><SSvcInstallationOptionsMaintenance dateTimeTagFormat=\"CdxDateTime\" transactionType =\"READ\" ><SSvcInstallationOptionsMaintenanceService><SSvcInstallationOptionsMaintenanceHeader InstallationOptionID=\"1\"/><SSvcInstallationOptionsMaintenanceDetails InstallationOptionID=\"1\"><InstallMsg><InstallMsgHeader InstallationOptionID=\"1\"/></InstallMsg><InstallAlg><InstallAlgHeader InstallationOptionID=\"1\"/></InstallAlg><Module></Module></SSvcInstallationOptionsMaintenanceDetails></SSvcInstallationOptionsMaintenanceService></SSvcInstallationOptionsMaintenance></SOAP-ENV:Body></SOAP-ENV:Envelope>"
   username "user"
}

Related information

These two previous articles are specific to external monitors, but contain some more useful details about command line testing and capturing traffic:

LTM External Monitors: The Basics
LTM External Monitors: Troubleshooting
HTTP Basic Access Authentication iRule Style

I started working on an administrator control panel for my previous Small URL Generator Tech Tip (part 1 and part 2) and realized that we probably didn't want to make our Small URL statistics and controls viewable by everyone. That led me to make a decision as to how to secure this information. Why not the old but venerable HTTP basic access authentication? It's simple, albeit somewhat insecure, but it works well and is easy to deploy.

First, a little background on how this mechanism works:

1. The user places a GET request for a page without providing authentication.
2. The server responds with a "401 Authorization Required" status code and the authentication type and realm specified by the WWW-Authenticate header. In our case we will be using basic access authentication, and we've arbitrarily named our realm "Secured Area".
3. The user provides a username and password. The client joins the username and password with a colon and applies base-64 encoding to the concatenated string.
4. The client presents this to the server via the Authorization header. If the credentials are verified by the server, the client is granted access to the "Secured Area".

Authentication Required for "Secured Area"
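As a worked example of step 3, the classic credential pair from the HTTP authentication RFC encodes like this (any base64 tool will do):

% echo -n "Aladdin:open sesame" | base64
QWxhZGRpbjpvcGVuIHNlc2FtZQ==

The client would then send the header Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==.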
This transaction normally takes place between the client and application server, but we can emulate this functionality using an iRule. The mechanism is rather simple and easy to implement. We need to look for the client Authorization header and, if present, verify the credentials. If the credentials are valid, grant access to the virtual server's content; if not, display the authentication box and then repeat the process. On the BIG-IP side, we are storing the username and MD5-digested password in a data group (which we aptly named authorized_users). While this is not as secure as a salted hash, it does provide some security beyond storing the credentials in plain text on our BIG-IP. Once we took these elements into consideration, this is the iRule we developed:

when HTTP_REQUEST {
  binary scan [md5 [HTTP::password]] H* password

  if { [class lookup [HTTP::username] $::authorized_users] equals $password } {
    log local0. "User [HTTP::username] has been authorized to access virtual server [virtual name]"

    # Insert iRule-based application code here if necessary
  } else {
    if { [string length [HTTP::password]] != 0 } {
      log local0. "User [HTTP::username] has been denied access to virtual server [virtual name]"
    }

    HTTP::respond 401 WWW-Authenticate "Basic realm=\"Secured Area\""
  }
}

There are a couple of different ways this iRule could be implemented. It can be applied as-is directly to any HTTP virtual and begin protecting the virtual's contents. It can also be used to secure an iRule-based application, in which case the application code would need to be encapsulated where the comment is located in the above code. I will be publishing a more functional example of this in the near future, but you can start using it now if you have such a necessity.

Earlier we discussed the inherent insecurity of basic access authentication due to the username and password being transmitted in plain text. To combat this, you'll probably want to conduct this transaction over an SSL-secured connection. Additionally, you'll want to generate the passwords as MD5 hashes before adding them to your data group. Here are a few ways to accomplish this:

Bash

% echo -n "password" | md5sum
5f4dcc3b5aa765d61d8327deb882cf99 -

Perl

% perl -e 'use Digest::MD5 qw(md5_hex); print md5_hex("password")'
5f4dcc3b5aa765d61d8327deb882cf99

Ruby (using irb)

irb(main):001:0> require "md5"
=> true
irb(main):002:0> MD5::md5("password")
=> #<MD5: 5f4dcc3b5aa765d61d8327deb882cf99>

While HTTP basic access authentication may not be the best authentication method for every case, it definitely has its advantages. It is easy to deploy (and even easier via an iRule), provides basic authentication without having to configure or depend on an external authentication service, and is supported by any browser developed in the last 15 years. Through the use of different data groups you can easily separate access to different virtuals or provide a simple SSO (Single Sign-On) solution for a number of virtual servers. We hope you find this tidbit of code to be useful in your environment. Stay tuned for more extensive examples and usages of this tech tip in the near future!
Monitoring Open Source databases with BIG-IP

One of the nice features added in the 10.1 release of our BIG-IP software was native monitors for both PostgreSQL and MySQL. These new monitors allow you to more accurately monitor your databases using specific queries that suit your needs. This is far better than being required to call an external script to do your monitoring, or relying only on a TCP monitor to determine if your database is listening on a specific port. In general, there are just a few things you need to do to successfully monitor your database:

1. Create a dedicated database user to use for monitoring. This is not required, but is generally recommended. I personally would never use a root or admin login for monitoring a database.
2. Create your monitor. This step requires you to have a specific piece of information you want to know about in mind -- essentially a SQL query. This can be anything from a simple query that just confirms that the database is up to a complex one that returns information about processes, application status, etc. The sky really is the limit here.
3. Attach your monitor to a node or a pool of nodes.

Without further ado, let us jump straight in.

Table of Contents

1 Creating a dedicated database user for monitoring
1.1 Creating a user in PostgreSQL
1.2 Creating a user in MySQL
2 Creating your monitor
3 Attaching your monitor

Creating a dedicated database user for monitoring

This is the only step that is different depending on which database you use.

Creating a user in PostgreSQL

Using the command line on your PostgreSQL server, run the following command:

$ createuser -U postgres -W -PE bigip

You will be prompted to enter a password for the bigip user and asked to reenter it. You will then be asked three questions concerning the bigip user's privileges to the database. I recommend answering No to all of them. Finally, you will be prompted for the password of the user you are connecting to the database as. Here is the breakdown of the command I used above:

createuser -- The command we are using.
-U postgres -- Connect as the 'postgres' user.
-W -- Prompt me for a password for the 'postgres' user.
-PE -- Prompt me for a password for the new 'bigip' user and encrypt it.
bigip -- The name of the new user.

Depending on your setup, you may need to grant your new bigip user access to certain tables, views, or other objects. For information on how to do that, refer to the PostgreSQL documentation for GRANT at http://www.postgresql.org/docs/current/interactive/sql-grant.html (a simple example follows below).
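For instance, if your monitoring query were going to read from a specific table, a grant along these lines would be enough; the table name here is hypothetical:

-- allow the monitoring account to SELECT from one table only
GRANT SELECT ON some_table TO bigip;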
In this example, I do not need to grant the bigip user any extra privileges, since I will be querying a function that is available to everyone; I will be querying the number of backend processes for a particular database using the pg_stat_get_db_numbackends function. This function, however, requires another piece of information, the database OID, in order to run. So, first let us find that OID.

$ psql -U bigip -W postgres -c 'SELECT datname,oid FROM pg_database'

This should return something like the following:

  datname  |  oid
-----------+-------
 template1 |     1
 template0 | 11510
 postgres  | 11511
 pgbench   | 16853
(4 rows)

Good, I have the OID I need. In this case, it is 16853, since the pgbench database is the one I want to monitor. Now, let us run the same query we would with the BIG-IP monitor to make sure we get an expected result:

$ psql -U bigip -W postgres -c 'SELECT pg_stat_get_db_numbackends(16853)'

This should return something like the following:

 pg_stat_get_db_numbackends
----------------------------
                          7
(1 row)

I knew ahead of time, in this example, that the number of backends is 7, so this query just confirmed what the output should be. The only other thing that may be required before continuing on is to add the new user to your pg_hba.conf file. Depending on your environment, an entry for the new user may be required in order to make a remote connection over the network from the self-IP of your BIG-IP to your database server. Find the documentation at http://www.postgresql.org/docs/current/interactive/auth-pg-hba-conf.html. For my configuration, I added the following line:

host postgres bigip 10.1.1.251/32 md5

Please note that this file is only read at PostgreSQL startup and reload. If/when you add an entry to your pg_hba.conf file, you will need to tell the main server process to re-read it, via pg_ctl reload or by sending it a HUP signal (kill -HUP <postmaster pid>). Now we can continue on to creating the monitor.

Creating a user in MySQL

Using the command line on your MySQL server, run the following command:

$ mysql -u root -p

This will drop you into the MySQL interactive shell. From there, run the following command:

mysql> GRANT SELECT ON <database>.<table> TO 'bigip'@'<BIG-IP self IP>' IDENTIFIED BY '<password>';

In our example, the GRANT statement looked like this:

mysql> GRANT SELECT ON test_db.people TO 'bigip'@'10.1.1.251' IDENTIFIED BY 'bigip1';
mysql> exit;

Unlike PostgreSQL, this single command creates the user, sets its password, and sets its privileges all at once. By default, MySQL users are not allowed access to any information. This GRANT statement tells the database to allow the user bigip to connect from the IPv4 address 10.1.1.251 and run a SELECT statement on the people table in the test_db database. Now, let us run the same query we would with the BIG-IP monitor to make sure we get an expected result:

$ mysql -u bigip -p -e "SELECT id FROM test_db.people WHERE first_name='George'"

This should return something like the following:

+----+
| id |
+----+
|  1 |
+----+

Which is what I expect. The person with a first name of George in the people table of the test_db database does indeed have an id of 1. Now we can continue on to creating the monitor.

2 Creating your monitor

Since the UI for creating the monitor for both PostgreSQL and MySQL is identical, I'm just going to step through creating the PostgreSQL one with screenshots. I will note the differences accordingly. Log on to your BIG-IP and navigate the left sidebar to Local Traffic -> Monitors. The Monitors UI will display; click Create in the upper right-hand corner of the frame. The New Monitor UI will display. In the Name box, type foss-db-pgsql_monitor. From the Type list, select PostgreSQL. The UI should refresh, showing you all the options for the monitor. Leave the Interval and Timeout values at their defaults unless you have a specific need to change them. In the Send String box, type the SQL query you want to perform at regular intervals for monitoring. You could use the one we discussed earlier, but it is just an example. In the Receive String box, type the response you expect to get back from your SQL query. The Username and Password boxes should be set according to the user we created earlier. The Database box should be set to the database you created your new user in.
Now we can continue on to Creating your monitor.

2 Creating your monitor

Since the UI for creating the monitor is identical for both PostgreSQL and MySQL, I'm just going to step through creating the PostgreSQL one with screenshots, and I will note the differences accordingly.

1. Log on to your BIG-IP and navigate the left sidebar to Local Traffic -> Monitors. The Monitors UI will display.
2. Click Create in the upper right-hand corner of the frame. The New Monitor UI will display.
3. In the Name box, type foss-db-pgsql_monitor.
4. From the Type list, select PostgreSQL. The UI should refresh, showing you all the options for the monitor.
5. Leave the Interval and Timeout values at their defaults unless you have a specific need to change them.
6. In the Send String box, type the SQL query you want to perform at regular intervals for monitoring. You could use the one we discussed earlier, but it is just an example.
7. In the Receive String box, type the response you expect to get back from your SQL query.
8. In the Username and Password boxes, enter the credentials of the user we created earlier.
9. In the Database box, enter the database you created your new user in. In our example, this would be postgres for PostgreSQL or test_db for MySQL.

The Receive Row and Receive Column boxes are how you deal with a SQL query that returns multiple results. If you know that what you are looking for is in a specific spot in a multi-line, multi-column result, this is where you define it. For example, take the following result from a not-as-simple SQL query:

SQL> SELECT id,first_name,last_name FROM test_db.people;
+----+------------+------------+
| id | first_name | last_name  |
+----+------------+------------+
|  1 | George     | Washington |
|  2 | John       | Adams      |
|  3 | Thomas     | Jefferson  |
+----+------------+------------+

... and say we want to monitor that Thomas Jefferson's first name is always Thomas. We would set Receive Row to 3 and Receive Column to 2.

Lastly, the Count box allows you to control how the monitor handles connections. The default of 0 tells the monitor to keep its connections open and reuse them. A setting of 1 opens and closes a new connection for each monitor check. Any other positive value keeps the monitor connection open and reuses it for that many monitor instances.

Click Finished.
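If you would rather script the monitor than click through the GUI, the same object can be sketched from the BIG-IP command line with tmsh. Treat this as a hedged example rather than a verified 10.1 recipe: it assumes the tmsh monitor types are named after the GUI types (postgresql, mysql), exact property syntax may vary by TMOS version, and <password> is a placeholder for whatever you assigned to the bigip user.

# PostgreSQL flavor, reusing the query and expected result from above
tmsh create ltm monitor postgresql foss-db-pgsql_monitor {
    send "SELECT pg_stat_get_db_numbackends(16853)"
    recv "7"
    username bigip
    password <password>
    database postgres
}

# MySQL flavor of the same monitor
tmsh create ltm monitor mysql foss-db-mysql_monitor {
    send "SELECT id FROM test_db.people WHERE first_name='George'"
    recv "1"
    username bigip
    password bigip1
    database test_db
}

Either way, the resulting monitor should show up in the Monitors list just like one created in the UI.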
3 Attaching your monitor

Now that we've created our new monitor, it should be available to attach to any node or pool that you wish. Simply find said node or pool and add the new monitor to its list of Active Health Monitors. If all goes as planned, your node or pool's status should shortly appear in a happy, green Up state.

If things are not going well, remember that the database monitors allow you to turn Debug on. Go back to the properties of your new monitor, change Configuration from Basic to Advanced, change Debug from No to Yes, and click Update. Doing this will log debug information to a file on your BIG-IP located at /var/log/DBDaemon.log. Additionally, turning on debug may also create some log files in /var/log that are named according to the monitor type and each node it is attempting to monitor. Check these files for details on failed logins, invalid credentials, denied connections, and so on.

Good Luck!

Back to Basics: Health Monitors and Load Balancing

#webperf #ado Because every connection counts

One of the truisms of architecting highly available systems is that you never, ever want to load balance a request to a system that is down. Therefore, some sort of health (status) monitoring is required. For applications, that means not just pinging the network interface or opening a TCP connection; it means querying the application and verifying that the response is valid. This, obviously, requires the application to respond. And respond often. Best practices suggest determining availability every 5 seconds or so. That means every X seconds the load balancing service is going to open a connection to the application and make a request, just like a user would do.

That adds load to the application. It consumes network, transport, application, and (possibly) database resources -- resources that cannot be used to service customers. While the impact on a single application may appear trivial, it's not. Remember, as load increases, performance decreases. And no matter how trivial it may appear, health monitoring is adding load to what may be an already heavily loaded application.

But Lori, you may be thinking, you expound on the importance of monitoring and visibility all the time! Are you saying we shouldn't be monitoring applications? Nope, not at all. Visibility is paramount, providing the actionable data necessary to enable highly dynamic, automated operations such as elasticity. Visibility through health monitoring is a critical means of ensuring availability at both the local and global level. What we may need to do, however, is move from active to passive monitoring.

PASSIVE MONITORING

Passive monitoring, as the modifier suggests, is not an active process. The load balancer does not open connections or query an application itself. Instead, it snoops on responses being returned to clients and from that infers the current status of the application. For example, if a request for content results in an HTTP error message, the load balancer can determine whether or not the application is available and capable of processing subsequent requests. If the load balancer is a BIG-IP, it can mark the service as "down", invoke an active monitor to probe the application status, and retry the request to another available instance -- ensuring end users do not see an error.

Passive (inband) monitors are not binary. That is, they aren't a simple "on" or "off" based on HTTP status codes. Such monitors can be configured to track the number of failures and evaluate failure rates against a configurable failure interval. When such thresholds are exceeded, the application can then be marked as "down". Passive monitors aren't restricted to availability status, either; they can also monitor for performance (response time). Failure to meet response-time expectations counts as a failure, and the application continues to be watched for subsequent failures.

Passive monitors are, like most inline/inband technologies, transparent. They quietly monitor traffic and act upon it without adding overhead to the process. Passive monitoring gives operations the visibility necessary to enable predictable performance and to meet or exceed user expectations with respect to uptime, without negatively impacting the performance or capacity of the applications it monitors.
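To make the passive approach concrete, here is a hedged sketch of what an inband (passive) monitor could look like on a BIG-IP via tmsh. The knobs shown (failure count, failure interval, response time, retry time) are the ones described above, but the monitor and pool names are hypothetical and exact property syntax may vary by TMOS version, so verify against your own system.

# Mark a pool member down after 3 failures within a 30-second window,
# or when a response takes longer than 10 seconds; wait 300 seconds
# before checking a down member again.
tmsh create ltm monitor inband inband_passive {
    failures 3
    failure-interval 30
    response-time 10
    retry-time 300
}

# Attach it to a (hypothetical) pool. Pairing the inband monitor with
# an active monitor lets the BIG-IP probe members the passive monitor
# has marked down, as described above.
tmsh modify ltm pool app_pool monitor inband_passive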