Two-Factor Authentication With Google Authenticator And APM
Introduction

Two-factor authentication (TFA) has been around for many years and the concept far pre-dates computers. The application of a keyed padlock and a combination lock to secure a single point would technically qualify as two-factor authentication: “something you have,” a key, and “something you know,” a combination. Until the past few years, two-factor authentication in its electronic form has been reserved for high-security environments: government, banks, large companies, etc. The most common method for implementing a second authentication factor has been to issue every employee a disconnected time-based one-time password hard token. The term “disconnected” refers to the absence of a connection between the token and a central authentication server. A “hard token” implies that the device is purpose-built for authentication and serves no other purpose. A soft or “software” token, on the other hand, has uses beyond providing an authentication mechanism. In the context of this article we will refer to mobile devices as soft tokens. This fits our definition, as the device can be used to make phone calls, check email, and surf the Internet, all in addition to providing a time-based one-time password.

A time-based one-time password (TOTP) is a single-use code for authenticating a user. It can be used by itself or to supplement another authentication method. It fits the definition of “something you have,” as it cannot be easily duplicated and reused elsewhere. This differs from a username and password combination, which is “something you know,” but could be easily duplicated by someone else. The TOTP uses a shared secret and the current time to calculate a code, which is displayed for the user and regenerated at regular intervals. Because the token and the authentication server are disconnected from each other, the clocks of each must be kept closely in sync.
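As an illustration of why the clocks matter (this sketch is not part of the iRule itself): both the token and the server derive the same counter from Unix time, and a server can tolerate small clock skew by also accepting the adjacent time steps.

```python
def time_step(unix_time, interval=30):
    # Token and server both derive this counter from the current Unix time
    return int(unix_time // interval)

def steps_to_accept(unix_time, interval=30):
    # Accepting the previous and next steps tolerates small clock drift
    step = time_step(unix_time, interval)
    return [step - 1, step, step + 1]

# Two clocks several seconds apart still agree on the counter,
# as long as they fall within the same 30-second window
assert time_step(60) == time_step(89)
```

If the drift exceeds the interval (plus whatever window the server accepts), the two sides compute different counters and every code fails, which is why NTP is a prerequisite for this solution.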
This is accomplished by using Network Time Protocol (NTP) to synchronize the clocks of each device with the correct time of central time servers. Using Google Authenticator as a soft token application makes sense from many angles. It is low-cost due to the proliferation of smart phones, and is available from the “app store” free of charge on all major platforms. It uses an open standard (defined by RFC 4226), which means that it is well-tested, well-understood, and secure. The calculation, as you will see later, is well-documented and relatively easy to implement in your language of choice (iRules in our case). This process is explained in the next section.

This Tech Tip is a follow-up to Two-Factor Authentication With Google Authenticator And LDAP. The first article in this series highlighted two-factor authentication with Google Authenticator and LDAP on an LTM. In this follow-up, we will be covering implementation of this solution with Access Policy Manager (APM). APM allows for far more granular control of network resources via access policies. Access policies are rule sets, which are intuitively displayed in the UI as flow charts. After creation, an access policy is applied to a virtual server to provide security, authentication services, client inspection, policy enforcement, etc. This article highlights not only a two-factor authentication solution, but also the usage of iRules within APM policies. By combining the extensibility of iRules with APM’s access policies, we are able to create virtually any functionality we might need. Note: A 10-user fully-featured APM license is included with every LTM license. You do not need to purchase an additional module to use this feature if you have fewer than 10 users.

Calculating The Google Authenticator TOTP

The Google Authenticator TOTP is calculated by generating an HMAC-SHA1 token, which uses a 10-byte base32-encoded shared secret as a key and Unix time (epoch) divided into a 30-second interval as inputs.
The resulting 20-byte (160-bit) token is converted to a 40-character hexadecimal string; the least significant (last) hex digit is then used to calculate a 0-15 offset. The offset is then used to read the next 8 hex digits starting at that position. The resulting 8 hex digits are then AND’d with 0x7FFFFFFF (2,147,483,647), then the modulo of the resultant integer and 1,000,000 is calculated, which produces the correct code for that 30-second period. Base32 encoding and decoding were covered in my previous Tech Tip titled Base32 Encoding And Decoding With iRules. The Tech Tip details the process for decoding a user’s base32-encoded key to binary as well as converting a binary key to base32. The HMAC-SHA256 token calculation iRule was originally submitted by Nat to the Codeshare on DevCentral. The iRule was slightly modified to support the SHA-1 algorithm, but is otherwise taken directly from the pseudocode outlined in RFC 2104. These two pieces of code contribute the bulk of the processing of the Google Authenticator code. The rest is done with simple bitwise and arithmetic functions.

Triggering iRules From An APM Access Policy

Our previously published Google Authenticator iRule combined the functionality of Google Authenticator token verification with LDAP authentication. It was written for a standalone LTM system without the leverage of APM’s Visual Policy Editor. The issue with combining these two authentication factors in a single iRule is that their functionality is not mutually exclusive or easily separable. We can greatly reduce the complexity of our iRule by isolating the Google Authenticator token verification and moving the directory server authentication to the APM access policy. APM iRules differ from those that we typically develop for LTM. iRules assigned to an LTM virtual server are triggered by events that occur during connection or payload handling.
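The calculation described above is language-agnostic; the production logic in this Tech Tip lives in the iRule, but for illustration here is the same algorithm as a minimal Python sketch. Note the HMAC-SHA1 digest is 20 bytes (the 40 hex characters mentioned above), and the 4-byte read masked with 0x7FFFFFFF is the binary equivalent of the 8-hex-digit AND.

```python
import base64
import hashlib
import hmac
import struct
import time

def google_auth_totp(base32_key, digits=6, interval=30, now=None):
    """Compute the current Google Authenticator code for a base32 key."""
    key = base64.b32decode(base32_key.upper())
    counter = int((time.time() if now is None else now) // interval)
    # HMAC-SHA1 over the 8-byte big-endian counter yields a 20-byte digest
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of the last byte is the offset
    offset = digest[-1] & 0x0F
    # Read 4 bytes (8 hex digits) at the offset and mask with 0x7FFFFFFF
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # The code for this 30-second period is the value modulo 1,000,000
    return format(code % (10 ** digits), "0{}d".format(digits))

# RFC 4226's published test key "12345678901234567890" in base32,
# at Unix time 59 (counter 1), produces the documented code 287082
assert google_auth_totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59) == "287082"
```

The RFC 4226 test vectors make this easy to sanity-check against any other implementation, including the iRule.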
Many of these events still apply to an LTM virtual server with an APM policy, but have no visibility into the access policy. This is where we enter the realm of APM iRules. APM iRules are applied to a virtual server exactly like any other iRule, but are triggered by custom iRule event agent IDs within the access policy. When the access policy reaches an iRule event, it will trigger the ACCESS_POLICY_AGENT_EVENT iRule event. Within the iRule we can execute the ACCESS::policy agent_id command to return the iRule event ID that triggered the event. We can then match on this ID string prior to executing any additional code. Within the iRule we can get and set APM session variables with the ACCESS::session command, which will serve as our conduit for transferring variables to and from our access policy. A visual walkthrough of this paragraph is shown below.

iRule Trigger Process

Create an iRule Event in the Visual Policy Editor
Specify a Name for the object and an ID for the Custom iRule Event Agent
Create an iRule with the ID referenced and assign it to the virtual server

when ACCESS_POLICY_AGENT_EVENT {
  if { [ACCESS::policy agent_id] eq "ga_code_verify" } {
    # get APM session variables
    set username [ACCESS::session data get session.logon.last.username]

    ### Google Authenticator token verification (code omitted for brevity) ###

    # set APM session variables
    ACCESS::session data set session.custom.ga_result $ga_result
  }
}

Add branch rules to the iRule Event which read the custom session variable and handle the result

Google Authenticator Two-Factor Authentication Process

Two-Factor Authentication Access Policy Overview

Rather than walking through the entire process of configuring the access policy from scratch, we’ll look at the policy (available for download at the bottom of this Tech Tip) and discuss the flow.
The policy has been simplified by creating macros for the redundant portions of the authentication process: Google Authenticator token verification and the two-factor authentication processes for LDAP and Active Directory. The “Google Auth verification” macro consists of an iRule event and 5 branch rules. The number of branch rules could be reduced to just two: success and failure. That would, however, limit our diagnostic capabilities should we hit a snag during our deployment, so we added logging for all of the potential failure scenarios. Remember that these logs are sent to APM reporting (Web UI: Access Policy > Reports), not /var/log/ltm. APM reporting is designed to provide per-session logging in the user interface without requiring grepping of the log files. The LDAP and Active Directory macros contain the directory server authentication and query mechanisms. Directory server queries are used to retrieve user information from the directory server. In this case we can store our Google Authenticator key (shared secret) in a schema attribute to remove a dependency from our BIG-IP. We do, however, offer the ability to store the key in a data group as well. The main portion of the access policy is far simpler and easier to read by using macros. When the user first enters our virtual server, we look at the Landing URI they are requesting. A first-time request will be sent to the “normal” logon page. The user will then input their credentials along with the one-time password provided by the Google Authenticator token. If the user’s credentials and one-time password match, they are allowed access. If they fail the authentication process, we increment a counter via a table in our iRule and redirect them back to an “error” logon page. The “error” logon page notifies them that their credentials are invalid. The notification makes no reference as to which of the two factors they failed.
If the user exceeds the allowed number of failures for a specified period of time, their session will be terminated and they will be unable to log in for a short period of time. An authenticated user would be allowed access to secured resources for the duration of their session.

Deploying Google Authenticator Token Verification

This solution requires three components (one optional) for deployment:

Sample access policy
Google Authenticator token verification iRule
Google Authenticator token generation iRule (optional)

The process for deploying this solution has been divided into four sections:

Configuring an AAA server

Log in to the Web UI of your APM
From the side panel select Access Policy > AAA Servers > Active Directory, then the + next to the text to create a new AD server
Within the AD creation form you’ll need to provide a Name, Domain Controller, Domain Name, Admin Username, and Admin Password
When you have completed the form click Finished

Copy the iRule to the BIG-IP and configure options

Download a copy of the Google Authenticator Token Verification iRule for APM from the DevCentral CodeShare (hint: this is much easier if you “edit” the wiki page to display the source without the line numbers and formatting)
Navigate to Local Traffic > iRules > iRule List and click the + symbol
Name the iRule “google_auth_verify_apm,” then copy and paste the iRule from the CodeShare into the Definition field
At the top of the iRule there are a few options that need to be defined:

lockout_attempts - number of attempts a user is allowed to make prior to being locked out temporarily (default: 3 attempts)
lockout_period - duration of lockout period (default: 30 seconds)
ga_code_form_field - name of the HTML form field used in the APM logon page; this field is defined in the "Logon Page" access policy object (default: ga_code_attempt)
ga_key_storage - key storage method for users' Google Authenticator shared keys, valid options include: datagroup, ldap, or ad (default: datagroup)
ga_key_ldap_attr - name of the LDAP schema attribute containing users' keys
ga_key_ad_attr - name of the Active Directory schema attribute containing users' keys
ga_key_dg - data group containing user := key mappings

Click Finished when you’ve configured the iRule options to your liking

Import sample access policy

From the Web UI, select Access Policy > Access Profiles > Access Profiles List
In the upper right corner, click Import
Download the sample policy for Two-Factor Authentication With Google Authenticator And APM and extract the .conf from the ZIP archive
Fill in the New Profile Name with a name of your choosing, then select Choose File, navigate to the extracted sample policy, and click Open
Click Import to complete the policy import
The sample policy’s AAA servers will likely not work in your environment; from the Access Policy List, click Edit next to the imported policy
When the Visual Policy Editor opens, expand the macro (LDAP or Active Directory auth) that describes your environment
Click the AD Auth object, select the AD server from the drop-down that was defined earlier in the AAA Servers step, then click Save
Repeat this process for the AD Query object

Assign sample policy and iRule to a virtual server

From the Web UI, select Local Traffic > Virtual Servers > Virtual Server List, then the create button (+)
In the New Virtual Server form, fill in the Name, Destination address, and Service Port (should be HTTPS/443), then select an HTTP profile and an SSL Profile (Client). Next you’ll add a SNAT Profile if needed, an Access Profile, and finally the token verification iRule
Depending on your deployment you may want to add a pool or other network connectivity resources
Finally, click Finished

At this point you should have a functioning virtual server that is serving your access policy. You’ll now need to add some tokens for your users. This process is another section on its own and is listed below.
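The lockout behavior controlled by lockout_attempts and lockout_period can be illustrated with a small stand-in sketch; the iRule itself uses the BIG-IP session table for this bookkeeping, so the code below only mirrors the logic, not the implementation.

```python
import time

LOCKOUT_ATTEMPTS = 3   # failures allowed before lockout (iRule default)
LOCKOUT_PERIOD = 30    # seconds a failure counts against the user (iRule default)

failures = {}          # username -> list of failure timestamps

def record_failure(user, now=None):
    now = time.time() if now is None else now
    # Keep only failures that are still within the lockout period
    recent = [t for t in failures.get(user, []) if now - t < LOCKOUT_PERIOD]
    recent.append(now)
    failures[user] = recent

def is_locked_out(user, now=None):
    now = time.time() if now is None else now
    recent = [t for t in failures.get(user, []) if now - t < LOCKOUT_PERIOD]
    return len(recent) >= LOCKOUT_ATTEMPTS
```

Three failed attempts inside the window lock the user out; once the window expires, the counter effectively resets and they may try again.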
Generating Software Tokens For Users

In addition to the Google Authenticator Token Verification iRule for APM, we also wrote a Google Authenticator Soft Token Generator iRule that will generate soft tokens for your users. The iRule can be added directly to an HTTP virtual server without a pool and accessed directly to create tokens. There are a few available fields in the generator: account, pre-defined secret, and a QR code option. The “account” field defines how to label the soft token within the user’s mobile device and can be useful if the user has multiple soft tokens on the same device (I have 3 and need to label them to keep them straight). A 10-byte string can be used as a pre-defined secret for conversion to a base32-encoded key. We advise against using a pre-defined key, because a key known to the user is something they know (as opposed to something they have) and could potentially be regenerated out-of-band, thereby nullifying the benefits of two-factor authentication. Lastly, there is an option to generate a QR code by sending an HTTPS request to Google and returning the QR code as an image. While this is convenient, it could be seen as insecure since the key may wind up in Google’s logs somewhere. You’ll have to decide if that is a risk you’re willing to take for the convenience it provides. Once the token has been generated, it will need to be added to a data group on the BIG-IP:

Navigate to Local Traffic > iRules > Data Group Lists
Select Create from the upper right-hand corner if the data group does not yet exist. If it exists, just select it from the list.
Name the data group “google_auth_keys” (the data group name can be changed in the beginning section of the iRule)
The type of data group will be String
Type the “username” into the String field and paste the “Google Authenticator key” into the Value field
Click Add and the username/key pair should appear in the list as such: user := ONSWG4TFOQYTEMZU
Click Finished when all your username/key pairs have been added.

Your user can scan the QR code or type the key into their device manually. After they scan the QR code, the account name should appear along with the TOTP for the account. The image below is how the soft token appears in the Google Authenticator iPhone application. Once again, do not let the user leave with a copy of the plain-text key. Knowing their key value will negate the value of having the token in the first place. Once the key has been added to the BIG-IP and the user’s device, and they’ve tested their access, destroy any reference to the key outside the BIG-IP’s data group. If you’re worried about having the keys in plain text on the BIG-IP, they can be encrypted with AES or stored off-box in LDAP and only queried via secure connection. This is beyond the scope of this article, but doable with iRules.
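As a companion to the generator iRule, here is a hedged Python sketch of the same provisioning idea: a random 10-byte secret becomes a 16-character base32 key, and the otpauth:// URI scheme is what Google Authenticator QR codes typically encode. Generating the QR code from this URI with a local encoder avoids sending the key to Google's chart service, addressing the privacy concern above. The function name and account label are illustrative, not part of the iRule.

```python
import base64
import secrets
from urllib.parse import quote

def new_token(account):
    # 10 random bytes -> 16-character base32 key, matching the article's format
    raw = secrets.token_bytes(10)
    key = base64.b32encode(raw).decode()   # e.g. "ONSWG4TFOQYTEMZU"
    # otpauth:// URI understood by Google Authenticator (and QR encoders)
    uri = "otpauth://totp/{}?secret={}".format(quote(account), key)
    return key, uri
```

The account label ends up as the token's display name on the device, which is the same purpose the generator's "account" field serves.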
Code

Google Authenticator Token Verification iRule for APM – Documentation and code for the iRule used in this Tech Tip
Google Authenticator Soft Token Generator iRule – iRule for generating soft tokens for users
Sample Access Policy: Two-Factor Authentication With Google Authenticator And APM – APM access policy

Reference Materials

RFC 4226 - HOTP: An HMAC-Based One-Time Password Algorithm
RFC 2104 - HMAC: Keyed-Hashing for Message Authentication
RFC 4648 - The Base16, Base32, and Base64 Data Encodings
SOL3122: Configuring the BIG-IP system to use an NTP server using the Configuration utility – Information on configuring time servers
Configuration Guide for BIG-IP Access Policy Manager – The “big book” on APM configurations
Configuring Authentication Using AAA Servers – Official F5 documentation for configuring AAA servers for APM
Troubleshooting AAA Configurations – Extra help if you hit a snag configuring your AAA server

Two-Factor Authentication With Google Authenticator And LDAP
Introduction

Earlier this year Google released their time-based one-time password (TOTP) solution named Google Authenticator. A TOTP is a single-use code with a finite lifetime that can be calculated by two parties (client and server) using a shared secret and a synchronized clock (see RFC 4226 for additional information). In the case of Google Authenticator, the TOTPs are generated using a software (soft) token on a mobile device. Google currently offers applications for the Apple iPhone, Android-based devices, and Blackberry handsets. A user authenticating with a Google Authenticator-enabled service will require possession of this software token. In order for the token to be effective, it must not be able to be duplicated and the shared secret should be closely guarded. Google Authenticator’s soft token solution offers a number of advantages over other commercially available solutions. It is free to use (all applications are free to download), the TOTP algorithm is open source, well-known, and well-tested, and finally it does not require a dedicated server for processing tokens. While certain potential weaknesses in SHA-1 have been identified, none of them can be exploited within the 30-second timeframe of the TOTP’s usability. For all intents and purposes, SHA-1 is reasonably secure, well-tested, and purpose-appropriate for this application. The algorithm, however, is only as secure as the users and administrators are at protecting the shared secret used in token processing.

Calculating The Google Authenticator TOTP

The Google Authenticator TOTP is calculated by generating an HMAC-SHA1 token, which uses a 10-byte base32-encoded shared secret as a key and Unix time (epoch) divided into a 30-second interval as inputs. The resulting 20-byte (160-bit) token is converted to a 40-character hexadecimal string; the least significant (last) hex digit is then used to calculate a 0-15 offset. The offset is then used to read the next 8 hex digits starting at that position.
The resulting 8 hex digits are then AND’d with 0x7FFFFFFF (2,147,483,647), then the modulo of the resultant integer and 1,000,000 is calculated, which produces the correct code for that 30-second period. Base32 encoding and decoding were covered in my previous Tech Tip titled Base32 Encoding And Decoding With iRules. The Tech Tip details the process for decoding a user’s base32-encoded key to binary as well as converting a binary key to base32. The HMAC-SHA256 token calculation iRule was originally submitted by Nat to the Codeshare on DevCentral. The iRule was slightly modified to support the SHA-1 algorithm, but is otherwise taken directly from the pseudocode outlined in RFC 2104. These two pieces of code contribute the bulk of the processing of the Google Authenticator code. The rest is done with simple bitwise and arithmetic functions.

Google Authenticator Two-Factor Authentication Process

Installing Google Authenticator Two-Factor Authentication

The installation of Google Authenticator two-factor authentication on your BIG-IP is divided into six sections: creating an LDAP authentication configuration, configuring an LDAP (Active Directory) authentication profile, testing your authentication profile, adding the Google Authenticator iRule and “user_to_google_auth” mapping data group, attaching the iRule to the authentication profile, and finally generating soft tokens for your users. The process is broken out into steps, as trying to complete all the sections in tandem can be difficult to troubleshoot.

Creating An LDAP (Active Directory) Authentication Configuration

The LDAP profile we will configure will be extremely basic: no SSL, no Active Directory, etc. A detailed walkthrough for more advanced deployments can be found in our best practices guide: Configuring LDAP remote authentication for Active Directory.

1. Login to your BIG-IP using administrator credentials
2. Navigate to Local Traffic > Profiles > Authentication > Configurations
3.
Click “Create” in the upper right-hand corner
4. Select “LDAP” from the “Type” drop-down menu
5. Now fill in the fields with your environment-specific values:

Name: ldap.f5test.local
Type: LDAP
Remote LDAP Tree: dc=f5test, dc=local
Host(s): <IP address(es) of LDAP server(s)>
Service Port: 389 (default)
LDAP Version: 3 (default)
Bind DN: cn=ldap_bind_acct, dc=f5test, dc=local (if your LDAP server allows anonymous binds you may not need this option)
Bind Password: <admin password>
Confirm Bind Password: <admin password>

6. Click “Finished” to save the configuration

Configuring An LDAP (Active Directory) Authentication Profile

1. Navigate to Local Traffic > Profiles > Authentication > Profiles
2. Click “Create” in the upper right-hand corner
3. Select “LDAP” from the “Type” drop-down menu
4. Fill in fields with appropriate values:

Name: ldap.f5test.local
Type: LDAP
Configuration: ldap.f5test.local (select previously named configuration from drop-down)
Rule: (leave this unchecked and not enabled for now, but this is where we will enable the Google Authenticator iRule shortly)

5. Click “Finished”

Test Your Authentication Profile

1. Create a basic HTTP virtual server with your LDAP authentication profile enabled on the virtual
2. Access your virtual from a web browser and you should be prompted with an HTTP Basic Authentication credential form
3. Test with known-working credentials; if everything works you’re good to go, if not you’ll need to troubleshoot the authentication issue

Adding the Google Authenticator iRule

1. Go to the DevCentral Codeshare and download the Google Authenticator iRule
2. Navigate to Local Traffic > iRules > iRule List
3. Click “Create” in the upper right-hand corner
4. Name your iRule “google_authenticator_plus_ldap_two_factor” and paste the iRule into the “Definition” section
5. Click “Finished” when you’re done

Attaching The Google Authenticator iRule To Your Authentication Profile

1.
Go back to the “Authentication Profile” section by browsing to Local Traffic > Profiles > Authentication > Profiles
2. Select your LDAP profile from the list
3. Now select the “google_authenticator_plus_ldap_two_factor” iRule from the “Rule” drop-down
4. Click “Finished”

Generating Software Tokens For Users

In addition to the Google Authenticator iRule, we also wrote a Google Authenticator Soft Token Generator iRule that will generate soft tokens for your users. The iRule can be added directly to an HTTP virtual server without a pool and accessed directly to create tokens. There are a few available fields in the generator: account, pre-defined secret, and a QR code option. The “account” field defines how to label the soft token within the user’s mobile device and can be useful if the user has multiple soft tokens on the same device (I have 3 and need to label them to keep them straight). A 10-byte string can be used as a pre-defined secret for conversion to a base32-encoded key. We advise against using a pre-defined key, because a key known to the user is something they know (as opposed to something they have) and could potentially be regenerated out-of-band, thereby nullifying the benefits of two-factor authentication. Lastly, there is an option to generate a QR code by sending an HTTPS request to Google and returning the QR code as an image. While this is convenient, it could be seen as insecure since the key may wind up in Google’s logs somewhere. You’ll have to decide if that is a risk you’re willing to take for the convenience it provides. Once the token has been generated, it will need to be added to a data group on the BIG-IP:

1. Navigate to Local Traffic > iRules > Data Group Lists
2. Select “Create” from the upper right-hand corner if the data group does not yet exist. If it exists, just select it from the list.
3.
Name the data group “user_to_google_auth” (the data group name can be changed in the RULE_INIT section of the Google Authenticator iRule)
4. The type of data group will be “string”
5. Type the “username” into the “string” field and paste the “Google Authenticator key” into the “value” field
6. Click “Add” and the username/key pair should appear in the list as such: user := ONSWG4TFOQYTEMZU
7. Click “Finished” when all your username/key pairs have been added.

Your user can scan the QR code or type the key into their device manually. After they scan the QR code, the account name should appear along with the TOTP for the account. The image below is how the soft token appears in the Google Authenticator iPhone application. Once again, do not let the user leave with a copy of the plain-text key. Knowing their key value will negate the value of having the token in the first place. Once the key has been added to the BIG-IP and the user’s device, and they’ve tested their access, destroy any reference to the key outside the BIG-IP’s data group. If you’re worried about having the keys in plain text on the BIG-IP, they can be encrypted with AES or stored off-box in LDAP and only queried via secure connection. This is beyond the scope of this article, but doable with iRules.

Testing and Troubleshooting

There are a lot of moving pieces in this iRule, so troubleshooting can be a bit daunting at first glance, but because all of the pieces can be separated into their constituents the problem is usually identified quickly. There are five pieces that make up this solution: the LDAP service, the BIG-IP LDAP profile, the Google Authenticator iRule, the “user_to_google_auth” mapping data group, and finally the soft token. Try to separate them from each other to expedite the troubleshooting process. Here are a few helpful hints in troubleshooting potential issues:

1. Are all the clocks synchronized?
The BIG-IP and LDAP server can be tested from the command line by running ‘ntpdate -q pool.ntp.org’. If the clocks are more than a few seconds off, they’ll need to be adjusted. An NTP server should be configured for all devices. Likewise, the user’s mobile device must be configured to use network time or else the calculated value will always be wrong. Remember that timezones do not matter when using Unix time.

2. Is basic LDAP working without the iRule attached?

Before ever touching any of the Google Authenticator-related iRules, data groups, devices, etc., your LDAP configuration should be in working order. If you’re having problems finding the issue, enable “debug logging” at the bottom of the LDAP authentication configuration page on your BIG-IP and tail the logs on your LDAP server. Revisit the best practices guide if you are still unsure about any configuration options.

3. Turn on (or increase) logging for the Google Authenticator iRule.

In the RULE_INIT section of the Google Authenticator iRule, there is a debug logging option. Set it to ‘2’ and all actions from the iRule will be logged to /var/log/ltm. If you see one particular area that is consistently hanging, investigate it further.

Conclusion

With every passing day system security becomes a greater concern. Today’s attacks are far more sophisticated and costly than those of days past. With all the stories of stolen laptops and other devices in the field, it is a little easier to sleep as a systems administrator knowing that a tech-aware thief has one more hurdle to surpass in an effort to compromise your infrastructure. The implementation costs of deploying two-factor authentication with Google Authenticator in an existing F5 infrastructure are very low, assuming your employees have company-issued mobile devices. The cost can be reduced to the man-hours required to install this iRule and generate tokens for your users.
The cost is almost certainly less than that of a single incident of a compromised account. Until next time, batten down the hatches and get that two-factor project underway that’s been on the backburner for two years.

Code and References

Google Authenticator iRule – Documentation and code for the iRule used in this Tech Tip
Google Authenticator Soft Token Generator iRule – iRule for generating soft tokens for users
RFC 4226 - HOTP: An HMAC-Based One-Time Password Algorithm
RFC 2104 - HMAC: Keyed-Hashing for Message Authentication
RFC 4648 - The Base16, Base32, and Base64 Data Encodings
SOL11072 - Configuring LDAP remote authentication for Active Directory

One Time Passwords via an SMS Gateway with BIG-IP Access Policy Manager
One-time passwords, or OTPs, are used (as the name indicates) for a single session or transaction. The plus side is a more secure deployment; the downside is two-fold: first, most solutions involve a token system, which is costly in management, dollars, and complexity, and second, people are lousy at remembering things, so a delivery system for that OTP is necessary. The exercise in this tech tip is to employ BIG-IP APM to generate the OTP and pass it to the user via an SMS Gateway, eliminating the need for a token-creating server/security appliance while reducing cost and complexity.

Getting Started

This guide was developed by F5er Per Boe utilizing the newly released BIG-IP version 10.2.1. The “-secure” option for the mcget command is new in this version and is required in one of the steps for this solution. Also, this solution uses the Clickatell SMS Gateway to deliver the OTPs. Their API is documented at http://www.clickatell.com/downloads/http/Clickatell_HTTP.pdf. Other gateway providers with a web-based API could easily be substituted. Also, there are steps at the tail end of this guide to utilize the BIG-IP’s built-in mail capabilities to email the OTP during testing in lieu of SMS. The process for delivering the OTP is shown in Figure 1. First a request is made to the BIG-IP APM. The policy is configured to authenticate the user against Active Directory and retrieve their mobile number, and if successful, generate an OTP and pass it along to the SMS gateway via the HTTP API. The user will then enter the OTP into the form presented by APM before being allowed through to the server resources.

BIG-IP APM Configuration

Before configuring the policy, an access profile needs to be created, as do a couple authentication servers. First, let’s look at the authentication servers.

Authentication Servers

To create servers used by BIG-IP APM, navigate to Access Policy > AAA Servers and then click Create.
This profile is simple: supply your domain server, domain name, and admin username and password as shown in Figure 2. The other authentication server is for the SMS Gateway, and since it is an HTTP API we’re using, we need the HTTP type server as shown in Figure 3. Note that the hidden form values highlighted in red will come from your Clickatell account information. Also note that the form method is GET, the form action references the Clickatell API interface, and that the match type is set to look for a specific string. The Clickatell SMS Gateway expects the following format:

https://api.clickatell.com/http/sendmsg?api_id=xxxx&user=xxxx&password=xxxx&to=xxxx&text=xxxx

Finally, the successful logon detection value highlighted in red at the bottom of Figure 3 should be modified to match the response code returned from the SMS Gateway. Now that the authentication servers are configured, let’s take a look at the access profile and create the policy.

Access Profile & Policy

Before we can create the policy, we need an access profile, shown below in Figure 4 with all default settings. Now that that is done, we click on Edit under the Access Policy column, highlighted in red in Figure 5. The default policy is bare bones, or as some call it, empty. We’ll work our way through the objects, taking screen captures as we go and making notes as necessary. To add an object, just click the “+” sign after the Start flag. The first object we’ll add is a Logon Page, as shown in Figure 6. No modifications are necessary here, so you can just click Save. Next, we’ll configure the Active Directory authentication, so we’ll add an AD Auth object. The only setting here in Figure 7 is selecting the server we created earlier. Following the AD Auth object, we need to add an AD Query object on the AD Auth successful branch, as shown in Figures 8 and 9. The server is selected in the properties tab, and then we create an expression in the branch rules tab.
To create the expression, click Change, and then select the Advanced tab. The expression used in this AD Query branch rule:

expr { [mcget {session.ad.last.attr.mobile}] != "" }

Next we add an iRule Event object to the AD Query OK branch that will generate the one-time password and provide logging. Figure 10 shows the iRule Event object configuration. The iRule referenced by this event is below. The logging is there for troubleshooting purposes and should probably be disabled in production.

when ACCESS_POLICY_AGENT_EVENT {
   expr srand([clock clicks])
   set otp [string range [format "%08d" [expr int(rand() * 1e9)]] 1 6 ]
   set mail [ACCESS::session data get "session.ad.last.attr.mail"]
   set mobile [ACCESS::session data get "session.ad.last.attr.mobile"]
   set logstring mail,$mail,otp,$otp,mobile,$mobile
   ACCESS::session data set session.user.otp.pw $otp
   ACCESS::session data set session.user.otp.mobile $mobile
   ACCESS::session data set session.user.otp.username [ACCESS::session data get "session.logon.last.username"]
   log local0.alert "Event [ACCESS::policy agent_id] Log $logstring"
}

when ACCESS_POLICY_COMPLETED {
   log local0.alert "Result: [ACCESS::policy result]"
}

On the fallback path of the iRule Event object, add a Variable Assign object as shown in Figure 10b. Note that the first assignment should be set to secure, as indicated in the image with the [S]. The expressions in Figure 10b are:

session.logon.last.password = expr { [mcget {session.user.otp.pw}] }
session.logon.last.username = expr { [mcget {session.user.otp.mobile}] }

On the fallback path of the AD Query object, add a Message Box object as shown in Figure 11 to alert the user if no mobile number is configured in Active Directory. On the fallback path of the Event OTP object, we need to add the HTTP Auth object. This is where the SMS Gateway we configured in the authentication server is referenced. It is shown in Figure 12.
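As an aside, the OTP generation step in the iRule above (zero-pad a random integer to at least eight digits, then keep six of them) can be sketched outside of TMM. This Python version mirrors only the string manipulation; a real implementation would want a cryptographically secure source such as Python's secrets module rather than rand()-style randomness:

```python
import random

def generate_otp(rng=random):
    """Mirror the iRule's string handling: zero-pad a random integer to
    at least eight digits, then keep the six characters at indices 1-6,
    like [string range [format "%08d" ...] 1 6]."""
    n = int(rng.random() * 1e9)   # 0 .. 999,999,999
    padded = "%08d" % n           # at least eight digits
    return padded[1:7]            # six digits

print(generate_otp())
```

Because the padded string is always at least eight characters, the slice always yields exactly six digits, matching the iRule's behavior.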
On the fallback path of the HTTP Auth object, we need to add a Message Box as shown in Figure 13 to communicate the error to the client. On the Successful branch of the HTTP Auth object, we need to add a Variable Assign object to store the username. A simple expression and a unique name for this variable object are all that is changed. This is shown in Figure 14. On the fallback branch of the Username Variable Assign object, we'll configure the OTP logon page, which requires a Logon Page object (shown in Figure 15). I haven't mentioned it yet, but changing the name field of these objects isn't required; adding information specific to the object does, however, help with readability. On this form, only one entry field is required, the one-time password, so the second password field (enabled by default) is set to none and the initial username field is changed to password. The input field label below is changed to reflect the type of logon to better cue the user. Finally, we'll finish off with an Empty Action object where we'll insert an expression to verify the OTP. The name is configured in properties and the expression in the branch rules, as shown in Figures 16 and 17. Again, you'll want to click Advanced on the branch rules to enter the expression. The expression used in the branch rules above is:

expr { [mcget {session.user.otp.pw}] == [mcget -secure {session.logon.last.otp}] }

Note again that the -secure option is only available in version 10.2.1 forward. Now that we're done adding objects to the policy, one final step is to click on the Deny following the OK branch of the OTP Verify Empty Action object and change it from Deny to Allow. Figure 18 shows how it should look in the visual policy editor window. Now that the policy is complete, we can attach the access profile to the virtual server and test it out, as seen in Figures 19 and 20 below.
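For reference, the HTTP Auth object is effectively issuing the Clickatell GET request shown earlier. A quick Python sketch shows how that query string is assembled and URL-encoded; the account values below are placeholders for illustration, not real Clickatell credentials:

```python
from urllib.parse import urlencode

# Hypothetical account values for illustration only
params = {
    "api_id": "1234567",
    "user": "apiuser",
    "password": "apipass",
    "to": "4512345678",
    "text": "One Time Password is 609819",
}

def build_sendmsg_url(params):
    """Assemble the Clickatell HTTP API GET request described above."""
    return "https://api.clickatell.com/http/sendmsg?" + urlencode(params)

url = build_sendmsg_url(params)
print(url)
```

Note that urlencode handles the escaping (spaces in the message text, for instance), which the HTTP AAA server does for you on the BIG-IP side.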
Email Option

If during testing you'd rather send emails than utilize the SMS Gateway, then configure your BIG-IP for mail support (Solution 3664), keep the Logging object, lose the HTTP Auth object, and configure the system with this script to listen for the messages sent to /var/log/ltm from the configured Logging object:

#!/bin/bash
while true
do
  tail -n0 -f /var/log/ltm | while read line
  do
    var2=`echo $line | grep otp | awk -F'[,]' '{ print $2 }'`
    var3=`echo $line | grep otp | awk -F'[,]' '{ print $3 }'`
    var4=`echo $line | grep otp | awk -F'[,]' '{ print $4 }'`
    if [ "$var3" = "otp" -a -n "$var4" ]; then
      echo Sending pin $var4 to $var2
      echo One Time Password is $var4 | mail -s $var4 $var2
    fi
  done
done

The log messages look like this:

Jan 26 13:37:24 local/bigip1 notice apd[4118]: 01490113:5: b94f603a: session.user.otp.log is mail,user1@home.local,otp,609819,mobile,12345678

The output from the script as configured looks like this:

[root@bigip1:Active] config # ./otp_mail.sh
Sending pin 239272 to user1@home.local

Conclusion

The BIG-IP APM is an incredibly powerful tool to add to the LTM toolbox. Whether using the mail system or an SMS gateway, you can take a bite out of your infrastructure complexity by using this solution to eliminate the need for a token management service. Many thanks again to F5er Per Boe for this excellent solution!

LTM: Configuring IP Forwarding
A basic change in internal routing architecture and functionality between BIG-IP 4.x and LTM 9.x has caused some confusion for customers whose v4.x deployment depended on IP forwarding. Here is an explanation of the change and the new configuration requirements to support forwarding of IP traffic using LTM.

What changed?

Both BIG-IP and LTM are default-deny devices, which means a specific configuration is required to support every desired traffic flow. In BIG-IP, packets not matching a virtual server or SNAT/NAT would be dropped unless the BIG-IP v4.x global IP forwarding checkbox feature was enabled. With IP forwarding enabled, packets not matching a virtual or SNAT/NAT would be forwarded intact per the routing table entries. LTM also requires that all traffic match a defined TMM listener (a virtual server, SNAT, or NAT) or be dropped. However, LTM's full application proxy architecture separates routing intelligence from load balancing, and the deprecated IP forwarding feature was intentionally not included in LTM to optimize load balancing performance. The IP forwarding checkbox feature was deprecated early in the BIG-IP 4.x tree. Although F5 has long recommended that IP forwarding be replaced with forwarding virtual servers, forwarding pools, SNATs, or NATs, some customers retained their IP forwarding configuration when upgrading to LTM v9.x. Since those various configuration options exist to support traffic previously managed by IP forwarding, the One-Time Conversion Utility (OTCU) that translates v4 configurations to v9 syntax does not presume to configure global forwarding virtual servers in place of global IP forwarding. For those customers and other administrators already familiar with BIG-IP but now using LTM, it isn't obvious how to replicate the forwarding behaviour they require.
Configuring forwarding for LTM

The recommended replacement for global IP forwarding is a wildcard forwarding virtual server configured to listen for all IP protocols, all addresses, and all ports on all VLANs. This virtual server will catch all traffic not matching another listener and forward it in accordance with LTM's routing table entries. To configure it:

1. In the LTM GUI, browse to Virtual Servers and click "Create".
2. Configure the following properties:
   Destination: Network, Address=0.0.0.0, Mask=0.0.0.0
   Service Port: 0
   Type: Forwarding (IP)
   Protocol: *All Protocols
   VLAN Traffic: All VLANs
3. Click "Finish" to create the virtual server.

The resulting configuration snip looks like this:

virtual forward_vs {
   ip forward
   destination any:any
   mask none
}

This will forward all IP traffic as long as there is a matching route in the routing table. (Packets bound for destinations for which there is no route will be dropped with no ICMP notification.)

Commonly required modifications

You can limit forwarding to only traffic bound for specific subnets by specifying the appropriate subnet and mask. If a different router exists on any directly connected network, you may need to create a custom fastL4 profile with "Loose Initiation" and "Loose Close" enabled to prevent LTM from interfering with forwarded conversations traversing an asymmetrical path. If the forwarding virtual server is intended to allow outbound access for your privately addressed servers, you will need to configure a SNAT to translate the source address of that traffic to a publicly routable address. If you have multiple gateways, you can load balance requests between the routers. To do so, first create a gateway pool containing the routers as members.
Then configure the virtual server as above, but select Type "Performance (Layer 4)" instead of "Forwarding (IP)" and apply the gateway pool as its resource.

Related information

SOL7229: Methods of gaining administrative access to nodes through the BIG-IP system
If you only need to forward administrative traffic to your servers, and no other forwarding is required, there are several additional options detailed in this solution.

SOL473: Advantages and disadvantages of using IP forwarding
This is an old solution that summarizes the pros and cons of BIG-IP 4.x IP forwarding. I only suggest reading it now to highlight the fact that LTM's approach retains the advantages and overcomes the disadvantages mentioned therein.

Implementing HTTP Strict Transport Security in iRules
Last month I ran across a blog entry by Extreme Geekboy discussing a patch (now in the nightly builds of the forthcoming 4.0 release) for Firefox he submitted that implements the user agent components of HTTP Strict Transport Security. Strict Transport Security, or HSTS (or STS if that extra character is taxing to type), is an internet draft that allows site owners to specify https as the only acceptable means of accessing the site. This is accomplished by the site inserting a header that the browser will evaluate; for x number of seconds (specified in the header), the browser will rewrite all requests, whether typed by the user or returned in a link from the site, to https. This first part is good, but it is only half of the implementation. If you are under a man-in-the-middle attack, it matters not whether your data is encrypted, because the attacker has the keys and is quite happy to decrypt your session unbeknownst to you. This is where the second half of the draft comes in: it disallows the use of untrusted certificates (self-signed, untrusted-CA signed, etc.). Any link to an untrusted destination should result in an error in the browser. The goals of the draft are to thwart passive and active network attackers as well as imperfect web developers. It does not address phishing or malware. For details on the threat vectors, read section 2.3 of the draft. Implementation of this draft is actually quite trivial. To get there, I'll walk you through configuring your own certificate authority for use in testing, a BIG-IP (Don't have one? Get the VE trial!), and a server. All this testing for me is completely contained on my laptop, utilizing Joe's excellent article on laptop load balancing configuration with LTM VE and VMware, though full disclosure: I deployed Apache instead of IIS.

Working with Certificates

I've worked with certificates on Windows and Linux, but for this go I'll create the certificate authority on my Linux virtual machine and prepare the certificates.
Many have mad CLI skills with the openssl command switches, but I do not. So I'm a big fan of the CA.pl script for working with certificates, which hides a lot of the magic. Make a directory and copy a couple of tools into it for testing (my Ubuntu system file locations, ymmv):

jrahm@jrahm-dev:~$ mkdir catest
jrahm@jrahm-dev:~$ cd catest
jrahm@jrahm-dev:~/catest$ cp /usr/lib/ssl/misc/CA.pl .
jrahm@jrahm-dev:~/catest$ cp /usr/lib/ssl/openssl.cnf .

Create the certificate authority. The questions are pretty self-explanatory; make sure the common name is the name you want the CA to be referenced as.

jrahm@jrahm-dev:~/catest$ ./CA.pl -newca

Create the certificate and sign it. The questions are similar to the CA process. The common name should be the name of your site; in my case, this is test.testco.com.

jrahm@jrahm-dev:~/catest$ ./CA.pl -newreq
jrahm@jrahm-dev:~/catest$ ./CA.pl -sign

Export the root certificate to a Windows-compatible format (I had to use the openssl command for this one):

jrahm@jrahm-dev:~/catest$ openssl x509 -in cacert.pem -outform DER -out ca.der

Copy the files to the desktop (using pscp):

C:\Users\jrahm>pscp jrahm@10.10.20.200:/home/jrahm/catest/*.pem .
C:\Users\jrahm>pscp jrahm@10.10.20.200:/home/jrahm/catest/demoCA/ca.der .

Then:

Install the root certificate in Windows
Install the test.testco.com key and certificate to BIG-IP
Create the SSL profile for the CA-signed certificate
Create a self-signed certificate in BIG-IP for host test.testco.com
Create an additional clientssl profile for the self-signed certificate

Preparing the BIG-IP Configuration

To test this properly we need four virtual servers, a single pool, and a couple of iRules. The first two virtuals are for the "good" site and support the standard ports for HTTP and HTTPS. The second two virtuals are for the "bad" site; this site will represent our man-in-the-middle attacker.
The iRules support a redirect to HTTPS on the good site's HTTP virtual (as recommended in the draft) and the insertion of the HSTS header on the HTTPS virtual only (as required by the draft). Not specified in the draft is an appropriate length for the max-age. I'm adding logic to expire the max-age a day in advance of the certificate expiration date, but you can set a static length of time. I read on one blog that a user was setting it for 50 years. It's not necessary in my example, but I'm setting includeSubDomains as well, which will instruct browsers to securely request and link from test.testco.com and any subdomains of this site (e.g., my.test.testco.com).

### iRule for HSTS HTTP Virtuals ###
#
when HTTP_REQUEST {
   HTTP::respond 301 Location "https://[HTTP::host][HTTP::uri]"
}

### iRule for HSTS HTTPS Virtuals ###
#
when RULE_INIT {
   set static::expires [clock scan 20110926]
}
when HTTP_RESPONSE {
   HTTP::header insert Strict-Transport-Security "max-age=[expr {$static::expires - [clock seconds]}]; includeSubDomains"
}

HSTS & MITM Virtuals

### "Good" Virtuals ###
#
virtual testco_http-vip {
   snat automap
   pool testco-pool
   destination 10.10.20.111:http
   ip protocol tcp
   rules hsts_redirect
   profiles {
      http {}
      tcp {}
   }
}
virtual testco_https-vip {
   snat automap
   pool testco-pool
   destination 10.10.20.111:https
   ip protocol tcp
   rules hsts_insert
   profiles {
      http {}
      tcp {}
      testco_clientssl {
         clientside
      }
   }
}

### "Bad" Virtuals ###
#
virtual testco2_http-vip {
   snat automap
   pool testco-pool
   destination 10.10.20.112:http
   ip protocol tcp
   profiles {
      http {}
      tcp {}
   }
}
virtual testco2_https-vip {
   snat automap
   pool testco-pool
   destination 10.10.20.112:https
   ip protocol tcp
   profiles {
      http {}
      tcp {}
      testco-bad_clientssl {
         clientside
      }
   }
}

The Results

I got the expected results on both Firefox 4.0 and Chrome.
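Before moving on, a note on the max-age arithmetic in the HTTPS iRule: it is simply "seconds remaining until the configured expiry date." This Python sketch mirrors that calculation; the date matches the iRule's [clock scan 20110926] example, and the reading that 20110926 was itself chosen a day ahead of the real certificate expiry is my assumption:

```python
from datetime import datetime, timezone

def hsts_max_age(expires, now):
    """Seconds to advertise in max-age, mirroring the iRule's
    [expr {$static::expires - [clock seconds]}], clamped at zero."""
    return max(0, int((expires - now).total_seconds()))

# The iRule's expiry date; presumed to be a day before the cert expires
expires = datetime(2011, 9, 26, tzinfo=timezone.utc)
now = datetime(2011, 9, 1, tzinfo=timezone.utc)
print("Strict-Transport-Security: max-age=%d; includeSubDomains"
      % hsts_max_age(expires, now))
```

The clamp at zero is a safety the iRule does not bother with; once the date passes, the iRule would emit a negative max-age.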
Once I switched the virtual from the known good site to the bad site, both browsers presented error pages that I could not click through.

HSTS Test Results

Great! Where Is It Supported?

Support already exists in the latest releases of Google Chrome, and if you use the NoScript add-on for current Firefox releases you have support as well. As mentioned in the introductory section, Firefox 4.0 will support it as well when it releases.

Conclusion

HTTP Strict Transport Security is a promising development in thwarting some attack vectors between client and server, and is a simple yet effective deployment in iRules. One additional thing worth mentioning is the ability of the user agent (browser or browser add-on) to "seed" known HSTS servers. This would provide additional protection against the initial HTTP connection users might make before redirecting to the HTTPS site where the STS header is delivered. Section 12.2 of the draft discusses the bootstrap vulnerability when no seeds are in place prior to the first connection to a given site.

Implementing The Exponential Backoff Algorithm To Thwart Dictionary Attacks
Introduction

Recently there was a forum post regarding using the exponential backoff algorithm to prevent, or at the very least slow down, dictionary attacks. A dictionary attack is when a perpetrator attacks a weak system or application by cycling through a common list of username and password combinations. If you were to leave a machine with SSH open to the Internet for any length of time, it wouldn't take long for an attacker to come along and start hammering the machine. He'll go through his list until he either cracks an account, gets blocked, or hits the bottom of his list. The attacker has a distinct advantage when he can send unabated requests to the system or application he is attacking. The purpose of the exponential backoff algorithm is to increase the time between subsequent login attempts exponentially. Under this scenario, a normal user wouldn't be able to type or navigate faster than the minimum lockout period and probably has a very low likelihood of ever hitting the limit. In contrast, if someone were to make a number of repetitive requests in a small timeframe, the time he would be locked out would rise exponentially.

Exponential Backoff Algorithm

The exponential backoff algorithm is mathematically rather simple. The lockout period is calculated by raising 2 to the power of the number of previous attempts made, subtracting 1, then dividing by two. The equation looks like this:

lockout = (2^c - 1) / 2

where c is the number of previous attempts and the lockout period is in seconds. The effect of this calculation is that the lockout period is small for the first series of attempts but rises very quickly given a burst of attempts. If we assemble a table and a plot of previous attempts vs. lockout period, the accumulation becomes apparent, with each subsequent attempt doubling the lockout period. If an attacker were to hit an application with 20 attempts in a short window, they would be locked out almost indefinitely, or at least to the max lockout period, which we'll discuss shortly.
Attempts   Lockout (s)   Lockout (h, m, s)
1          0             0s
2          2             2s
3          4             4s
4          8             8s
5          16            16s
6          32            32s
7          64            1m 4s
8          128           2m 8s
9          256           4m 16s
10         512           8m 32s
11         1024          17m 4s
12         2048          34m 8s
13         4096          1h 8m 16s
14         8192          2h 16m 32s
15         16384         4h 33m 4s

Previous Attempts vs. Lockout Period

Calculating Integer Powers of 2

A number of standard TCL math functions are disabled in iRules because of their ability to consume immense CPU resources. While this does protect the average iRule developer from shooting himself in the leg with them, it limits the ability to perform more complex operations. One function in particular would make implementing the exponential backoff algorithm much easier: pow(). The pow() function provides the ability to perform exponentiation, or raising a number (the base) to the power of another (the exponent). While we would have needed to write code to perform this functionality for bases larger than 2, raising 2 to a power is actually a rather easy operation using an arithmetic shift (a left shift in this case). TCL (like many other modern languages) uses the << operator to perform a left shift (multiplication by 2) and the >> operator to perform a right shift (division by 2). This works because all of the potential lockout periods will be a geometric sequence of integer powers of 2. Take a look at the effect of a left shift on integer powers of two when represented as a binary number (padding added to represent an 8-bit integer):

Binary number      Decimal number   TCL left shift (tclsh)
0 0 0 0 0 0 0 1    1                % expr {1 << 0} => 1
0 0 0 0 0 0 1 0    2                % expr {1 << 1} => 2
0 0 0 0 0 1 0 0    4                % expr {1 << 2} => 4
0 0 0 0 1 0 0 0    8                % expr {1 << 3} => 8
0 0 0 1 0 0 0 0    16               % expr {1 << 4} => 16
0 0 1 0 0 0 0 0    32               % expr {1 << 5} => 32
0 1 0 0 0 0 0 0    64               % expr {1 << 6} => 64
1 0 0 0 0 0 0 0    128              % expr {1 << 7} => 128

Even if the power function were available, a bitwise operation is almost certainly the most efficient way to perform this calculation. Sometimes the most obvious answer is not necessarily the most efficient.
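The doubling sequence in the lockout table is easy to check outside of tclsh. This Python sketch (Python stands in for TCL here purely for illustration; the integer arithmetic is the same) regenerates the lockout values and their h/m/s rendering for a few rows:

```python
def lockout_seconds(attempts):
    """Lockout for a given attempt count, matching the doubling
    sequence in the table above (valid for attempts >= 2)."""
    return 1 << (attempts - 1)

def hms(seconds):
    """Render seconds as the table's 'Xh Ym Zs' column."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    parts = [(h, "h"), (m, "m"), (s, "s")]
    return " ".join("%d%s" % (v, u) for v, u in parts if v) or "0s"

for a in (7, 13, 15):
    print(a, lockout_seconds(a), hms(lockout_seconds(a)))
```

Running it reproduces rows 7, 13, and 15 of the table (64s as "1m 4s", 4096s as "1h 8m 16s", and 16384s as "4h 33m 4s").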
Had we not run into this small barrier, this solution probably would not have emerged. Check out the link below for a complete list of available math functions, operators, and expressions in iRules.

List of TCL math functions available in iRules

Implementing the Algorithm in iRules

Now that we know how to calculate integer powers of 2 using an arithmetic shift, the rest of the equation implementation should be straightforward. Once we replace the pow() function with a left shift, we get an equation that looks as such:

set new_lockout [expr (((1 << $prev_attempts)-1)/2)]

Now when we run through a geometric series in TCL we'll get "almost" the numbers in the tables above, but they'll all have a value of one less than expected, because we always divide an odd numerator, resulting in a fractional value that is truncated when converted to an integer. When this truncation takes place, the digits after the decimal place are dropped and only the integer portion remains. Given a random distribution of floats truncated to integers, there would normally be an even distribution of those rounded "correctly" and "incorrectly." However, in this series all of the results end with a decimal value of .5 and are therefore all rounded "incorrectly":

Previous attempts   Calculated (float)   Calculated (integer)
0                   0                    0
1                   0.5                  0
2                   1.5                  1
3                   3.5                  3
4                   7.5                  7
5                   15.5                 15

We could use the equation listed above, but our numbers would not line up with our projections. In order to get more accurate numbers and save additional CPU cycles, we can further reduce the equation to this:

lockout = 2^(c-1)

Or in TCL, like this:

set new_lockout [expr (1 << ($prev_attempts-1))]

Now we've got something that is super fast and serves our purposes. It would be virtually impossible to overload a box with this simple operation. This is a far more efficient and elegant solution than the originally proposed power function.
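The off-by-one truncation described above can be demonstrated in a few lines; again Python stands in for tclsh, but the integer arithmetic is identical:

```python
def truncated_form(c):
    """((2^c) - 1) / 2 with integer division, as the truncated
    TCL expression evaluates it."""
    return ((1 << c) - 1) // 2

def shift_form(c):
    """The reduced equation: 2^(c-1)."""
    return 1 << (c - 1)

# The truncated form always lands one below the reduced form
for c in range(1, 6):
    print(c, truncated_form(c), shift_form(c))
```

For every c >= 1 the truncated form yields exactly one less than the shift form, which is why the article switches to the shift form to match the projected table.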
Maximums, Minimums, and State

The benefit of the exponential backoff algorithm is that the lockout increases exponentially when probed repeatedly, but this is also its downside. As the timeout grows exponentially, you can potentially lock out a user permanently and quickly exhaust the memory allocated for a 32-bit integer. The maximum value of a 32-bit integer that can be stored in iRules is 2,147,483,647, which equates to 68 years, 1 month, and change (far longer than a BIG-IP will be in service). For this reason, we'll want to set a maximum lockout period for a user so that we don't exceed the memory allocation or lock a user out permanently. We recommend a maximum lockout period of anything from an hour (3,600s) to a day (86,400s) for most use cases. The maximum lockout period is defined by the static max_lockout variable in the RULE_INIT event. On the flip side, you'll notice that there is a case where we get a lockout period of zero for the first request, which will cause a timing issue for the iRule. Therefore we need to establish some minimum for the lockout period. During our tests we found that 2 seconds works well for normal browsing behaviors. You may, however, decide that you never want anyone submitting faster than every 10 seconds while keeping the added benefit of the exponential backoff; in that case you would change the static min_lockout value in RULE_INIT to 10 (seconds). Lastly, we use the session table to record the number of previous attempts and the lockout period. We define the state table name in the CLIENT_ACCEPTED event and use a unique session identifier consisting of the client's IP and source port to track each session's behavior. Once we receive a POST request, we'll increment the previous attempts counter and calculate a new timeout (lockout period) for the table entry. Once enough time has passed, the entry will time out of the session table and the client may submit another POST request without restriction.
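Putting the bounds together, the clamping described above is just a floor and ceiling around the shifted value. A minimal sketch, using the same 2s/86,400s limits as the iRule:

```python
MIN_LOCKOUT = 2        # seconds; static::min_lockout in the iRule
MAX_LOCKOUT = 86400    # seconds; static::max_lockout in the iRule

def next_lockout(prev_attempts):
    """Shift-based backoff clamped to the configured floor and
    ceiling, mirroring the bounds logic described above."""
    raw = 1 << (prev_attempts - 1) if prev_attempts >= 1 else 0
    return min(max(raw, MIN_LOCKOUT), MAX_LOCKOUT)

print(next_lockout(1), next_lockout(10), next_lockout(30))
```

With these limits, attempt 30 (raw value 536,870,912 seconds) is capped at a day, and the first couple of attempts never drop below the 2-second floor.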
The Exponential Backoff iRule

Once we bring all those concepts together, we arrive at something like the iRule listed below. When applied to an HTTP virtual server, the exponential backoff iRule counts POST requests and prevents users from firing them off too quickly. If a user or bot issues two tightly coupled POST requests, they will be locked out temporarily and receive an HTTP response advising them to slow down. If they continue to probe the virtual server, they will be locked out for the next 24 hours on their 18th attempt.

when RULE_INIT {
   set static::min_lockout 2
   set static::max_lockout 86400
   set static::debug 1
}

when CLIENT_ACCEPTED {
   # per-connection values; stored in plain local variables rather than
   # static:: variables, since they differ for every client connection
   set session_id "[IP::remote_addr]:[TCP::remote_port]"
   set state_table "[virtual name]-exp-backoff-state"
}

when HTTP_REQUEST {
   if { [HTTP::method] eq "POST" } {
      set prev_attempts [table lookup -subtable $state_table $session_id]

      if { $prev_attempts eq "" } { set prev_attempts 0 }

      # exponential backoff - http://en.wikipedia.org/wiki/Exponential_backoff
      set new_lockout [expr (1 << ($prev_attempts-1))]

      if { $new_lockout > $static::max_lockout } {
         set new_lockout $static::max_lockout
      } elseif { $new_lockout < $static::min_lockout } {
         set new_lockout $static::min_lockout
      }

      table incr -subtable $state_table $session_id
      table timeout -subtable $state_table $session_id $new_lockout

      if { $static::debug > 0 } {
         log local0. "POST request (#[expr ($prev_attempts+1)]) from $session_id received during lockout period, updating lockout to ${new_lockout}s"
      }

      if { $prev_attempts > 1 } {
         # alternatively respond with content - http://devcentral.f5.com/s/wiki/iRules.HTTP__respond.ashx
         set response "<html><head><title>Hold up there!</title></head><body><center><h1>Hold up there!</h1><p>You're"
         append response " posting too quickly. Wait a few moments and try again.</p></body></html>"

         HTTP::respond 200 content $response
      }
   }
}

CodeShare: Exponential Backoff iRule

Conclusion

The exponential backoff algorithm provides a great method for thwarting attacks that rely on a heavy volume of traffic directed at a system or application. Even if an attacker were to discover the minimum lockout period, they would still be greatly slowed in their attack and would likely move on to easier targets. Protecting an application or system is similar to locking up a bike in many ways. Completely impenetrable security is a difficult (and some would say impossible) endeavor. We can, however, implement a heavy-gauge U-lock accompanied by a thick cable to protect the frame and other various expensive components. A perpetrator would have to expend an inordinate amount of energy to compromise our bike (or system) relative to other targets. The greater the difference in effort (given a similar reward), the more likely it is that the attacker will move on to easier targets. Until next time, happy coding.

Persisting SSL Connections
Many customers use LTM to handle SSL-encrypted traffic, and traffic that requires SSL certificate authentication and encryption often also requires persistence to a specific server for the life of an application session. LTM is capable of meeting most security requirements for traffic encryption with the three most common high-level SSL configurations: SSL Offloading, SSL Re-encryption, and SSL Pass-through. The available persistence options vary depending on which SSL configuration is implemented. In this article, I'll briefly describe each mode and the persistence options available for each.

SSL Offloading

LTM is offloading SSL (decrypting SSL and using a cleartext connection to the real server) if you have only a clientssl profile configured on your virtual server. This configuration is the recommended option if your application requires persistence and cleartext between LTM and the servers is acceptable, since it is the most optimal and offers the most flexibility as far as persistence is concerned. SSL offloading is optimal because it allows LTM to do the heavy lifting of encryption on the client side while completely eliminating any overhead of encryption on the server side. At the same time, it's the most flexible regarding persistence options: all of the persistence options available for unencrypted traffic are available when LTM decrypts the conversation:

Source Address: Also known as simple persistence, source address affinity directs requests to the same server based solely on the source IP address of a packet.
Destination Address: Also known as sticky persistence, destination address affinity directs session requests to the same server based solely on the destination IP address of a packet.
Cookies (for HTTP only): Cookie persistence uses an HTTP cookie stored on a client's computer to allow the client to reconnect to the same server previously visited.
Hash: Hash persistence allows you to use an iRule to create a persistence hash based on any persistent request data.
MSRDP: Microsoft Remote Desktop Protocol (MSRDP) persistence tracks sessions between clients and servers running the Microsoft® Remote Desktop Protocol (RDP) service.
SIP: Session Initiation Protocol (SIP) persistence uses the SIP CallID to track the servers to which messages belonging to the same session are sent. (SIP is a protocol that enables real-time messaging, voice, data, and video.)
SSL: SSL persistence is a persistence option specifically intended for use with non-terminated SSL sessions; it tracks the server to which connections should be sent using the SSL session ID.
Universal: Universal persistence allows you to write an iRule expression that defines what to persist on in a request, and can use nearly any persistent request information to track sessions: protocol headers, HTTP cookies, URI parameters, session IDs in the data stream, etc.

The most protocol- or application-specific persistence option available is recommended. (It's worth noting that whatever persistence option is optimal for the unencrypted version of your application should also be optimal when offloading SSL to LTM.) For HTTP applications, some form of cookie persistence is our most common recommendation, with Simple or Universal persistence as options if cookies are not supported by the expected client base. You may have noticed that SSL persistence didn't make the list. In fact, it isn't recommended unless it's the only available option, for reasons explained below in the section about SSL Pass-through.

SSL Re-encryption

LTM is re-encrypting SSL (decrypting SSL and re-encrypting over the connection to the real server) if you have both a clientssl and a serverssl profile configured on your virtual server. This configuration is the recommended option if your application requires persistence on session data but must also be encrypted between LTM and the servers.
An optimized SSL handshake and intelligent keep-alives for connections with the real servers still allow LTM to lighten the load on the servers even though they still have to perform encryption/decryption tasks. As with SSL offloading, all of the persistence options available for unencrypted traffic are available when LTM decrypts the conversation, and the most protocol- or application-specific persistence option available is recommended.

SSL Pass-through

LTM is performing SSL pass-through (neither decrypting nor re-encrypting SSL, instead forwarding the SSL handshake and connection directly to the real server) if you have neither a clientssl nor a serverssl profile configured on your SSL virtual server. This configuration is the recommended option only if your application cannot tolerate SSL proxying or decryption is not an option. For SSL pass-through configurations, the persistence options are severely limited: since LTM is not decrypting the conversation, only the non-SSL-encrypted information in the session is available for use as a session identifier. The primary pieces of persistent unencrypted information in an encrypted SSL flow are the source and destination IP addresses and the SSL session ID itself, so only Source Address, Destination Address, or SSL persistence will work with SSL pass-through configurations. Our recommendation, as with SSL offloading or re-encryption, is still to choose persistent token data closest to the application, so in this case SSL is the preferred persistence method for SSL pass-through. SSL persistence is intended to track non-terminated SSL sessions using the SSL session ID. Using this lower-level data rather than actual application session identifiers such as session IDs or cookies is less reliable, since SSL IDs are subject to renegotiation or reuse during the course of an application session, outside the application's control or awareness.
However, it's the best information available, so we recommend setting SSL persistence as the primary persistence method, then setting Source Address as a backup persistence method to stick new connections to the same server even if the SSL session ID changes mid-session. (Note: Users behind large mega-proxies such as AOL may move from one proxy to another during the same application session, thus changing their source IP address to another within a very large address block. If your application will be serving users behind a large mega-proxy, be sure to set the persistence mask for Source Address persistence to encompass the entire range of possible alternate addresses.)

Multiple Certs, One VIP: TLS Server Name Indication via iRules
An age-old question that we've seen time and time again in the iRules forums here on DevCentral is "How can I use iRules to manage multiple SSL certs on one VIP?". The answer has always been "I'm sorry, you can't." The reasoning is sound. One VIP, one cert, that's how it's always been. You can't do anything with the connection until the handshake is established and decryption is done on the LTM. We'd like to help, but we just really can't. That is…until now. The TLS protocol has somewhat recently provided the ability to pass a "desired servername" as a value in the originating SSL handshake. Finally we have what we've been looking for: a way to add contextual server info during the handshake, thereby allowing us to say "cert x is for domain x" and "cert y is for domain y". Known to us mortals as "Server Name Indication" or SNI (hence the title), this functionality is paramount for a device like the LTM that can regularly benefit from hosting multiple certs on a single IP. We should be able to pull out this information and choose an appropriate SSL profile now, with a cert that corresponds to the servername value that was sent. Now all we need is some logic to make this happen. Lucky for us, one of the many bright minds in the DevCentral community has whipped up an iRule to show how you can finally tackle this challenge head on. Because Joel Moses, the shrewd mind and DevCentral MVP behind this example, has already done a solid write-up, I'll quote liberally from his fine work and add some additional context where fitting. Now on to the geekery: First things first, you'll need to create a mapping of which servernames correlate to which certs (client SSL profiles in LTM's case). This could be done in any manner, really, but the most efficient, both from a resource and a management perspective, is to use a class. Classes, also known as DataGroups, are name->value pairs that will allow you to easily retrieve the data later in the iRule.
Quoting Joel: Create a string-type datagroup to be called "tls_servername". Each hostname that needs to be supported on the VIP must be input along with its matching clientssl profile. For example, for the site "testsite.site.com" with a ClientSSL profile named "clientssl_testsite", you should add the following values to the datagroup. String: testsite.site.com Value: clientssl_testsite Once you’ve finished inputting the different server->profile pairs, you’re ready to move on to pools. It’s very likely that since you’re now managing multiple domains on this VIP you'll also want to be able to handle multiple pools to match those domains. To do that you'll need a second mapping that ties each servername to the desired pool. This could again be done in any format you like, but since it's the most efficient option and we're already using it, classes make the most sense here. Quoting from Joel: If you wish to switch pool context at the time the servername is detected in TLS, then you need to create a string-type datagroup called "tls_servername_pool". You will input each hostname to be supported by the VIP and the pool to direct the traffic towards. For the site "testsite.site.com" to be directed to the pool "testsite_pool_80", add the following to the datagroup: String: testsite.site.com Value: testsite_pool_80 If you don't, that's fine, but realize all traffic from each of these hosts will be routed to the default pool, which is very likely not what you want. Now then, we have two classes set up to manage the mappings of servername->SSLprofile and servername->pool, all we need is some app logic in line to do the management and provide each inbound request with the appropriate profile & cert. This is done, of course, via iRules. Joel has written up one heck of an iRule which is available in the codeshare (here) in it's entirety along with his solid write-up, but I'll also include it here in-line, as is my habit. 
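Before getting to the iRule itself, here is a rough Python model of what it has to do: pull the server_name extension out of the raw ClientHello bytes, then look the name up in the two datagroups, falling back to the defaults on a miss. The record offsets come from the TLS specification; the hostnames and profile/pool names are just the examples used above. A production parser would need real bounds checking.

```python
import struct

def extract_sni(record: bytes):
    """Return the servername from a raw TLS ClientHello record, or None.
    Sketch only -- no bounds checking, single record assumed."""
    if not record or record[0] != 0x16:      # 0x16 = TLS handshake record
        return None
    # record header (5) + handshake header (4) + version (2) + random (32)
    pos = 5 + 4 + 2 + 32
    sess_len = record[pos]; pos += 1 + sess_len              # session ID
    (ciph_len,) = struct.unpack_from("!H", record, pos); pos += 2 + ciph_len
    comp_len = record[pos]; pos += 1 + comp_len              # compression
    (ext_total,) = struct.unpack_from("!H", record, pos); pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        etype, elen = struct.unpack_from("!HH", record, pos); pos += 4
        if etype == 0:                       # type 0 = server_name
            # skip list length (2) + name type (1); read name length (2)
            (name_len,) = struct.unpack_from("!H", record, pos + 3)
            return record[pos + 5:pos + 5 + name_len].decode()
        pos += elen
    return None

# The two datagroups behave like string-keyed maps with a default fallthrough.
tls_servername      = {"testsite.site.com": "clientssl_testsite"}
tls_servername_pool = {"testsite.site.com": "testsite_pool_80"}

def select(record, default_profile="clientssl_default", default_pool="default_pool"):
    name = (extract_sni(record) or "").lower()
    return (tls_servername.get(name, default_profile),
            tls_servername_pool.get(name, default_pool))

def make_hello(host: bytes) -> bytes:
    """Build a minimal ClientHello carrying an SNI extension (test helper)."""
    sni = (b"\x00\x00" + struct.pack("!H", len(host) + 5)
           + struct.pack("!H", len(host) + 3) + b"\x00"
           + struct.pack("!H", len(host)) + host)
    body = (b"\x03\x01" + b"\x00" * 32       # version + random
            + b"\x00"                        # empty session ID
            + b"\x00\x02\x00\x2f"            # one cipher suite
            + b"\x01\x00"                    # null compression
            + struct.pack("!H", len(sni)) + sni)
    hs = b"\x01" + len(body).to_bytes(3, "big") + body
    return b"\x16\x03\x01" + struct.pack("!H", len(hs)) + hs

assert select(make_hello(b"testsite.site.com")) == ("clientssl_testsite", "testsite_pool_80")
assert select(make_hello(b"unknown.example.com")) == ("clientssl_default", "default_pool")
```

Note the starting offset of 43 bytes (record header, handshake header, version, random); you'll see the same constant in the iRule's binary scan logic below.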
Effectively what's happening is the iRule is parsing through the data sent throughout the SSL handshake process and searching for the specific TLS servername extension, which are the bits that will allow us to do the profile switching magic. He's written it up to fall back to the default client SSL profile and pool, so it's very important that both of these things exist on your VIP, or you may likely find yourself with unhappy users. One last caveat before the code: Not all browsers support Server Name Indication, so be careful not to implement this unless you are very confident that most, if not all, users connecting to this VIP will support SNI. For more info on testing for SNI compatibility and a list of browsers that do and don't support it, click through to Joel's awesome CodeShare entry, I've already plagiarized enough. So finally, the code. Again, my hat is off to Joel Moses for this outstanding example of the power of iRules. Keep at it Joel, and thanks for sharing!

when CLIENT_ACCEPTED {
    if { [PROFILE::exists clientssl] } {

        # We have a clientssl profile attached to this VIP but we need
        # to find an SNI record in the client handshake. To do so, we'll
        # disable SSL processing and collect the initial TCP payload.

        set default_tls_pool [LB::server pool]
        set detect_handshake 1
        SSL::disable
        TCP::collect

    } else {

        # No clientssl profile means we're not going to work.

        log local0. "This iRule is applied to a VS that has no clientssl profile."
        set detect_handshake 0

    }
}

when CLIENT_DATA {

    if { ($detect_handshake) } {

        # If we're in a handshake detection, look for an SSL/TLS header.

        binary scan [TCP::payload] cSS tls_xacttype tls_version tls_recordlen

        # TLS is the only thing we want to process because it's the only
        # version that allows the servername extension to be present. When we
        # find a supported TLS version, we'll check to make sure we're getting
        # only a Client Hello transaction -- those are the only ones we can pull
        # the servername from prior to connection establishment.

        switch $tls_version {
            "769" -
            "770" -
            "771" {
                if { ($tls_xacttype == 22) } {
                    binary scan [TCP::payload] @5c tls_action
                    if { not (($tls_action == 1) && ([TCP::payload length] > $tls_recordlen)) } {
                        set detect_handshake 0
                    }
                }
            }
            default {
                set detect_handshake 0
            }
        }

        if { ($detect_handshake) } {

            # If we made it this far, we're still processing a TLS client hello.
            #
            # Skip the TLS header (43 bytes in) and process the record body. For TLS/1.0 we
            # expect this to contain only the session ID, cipher list, and compression
            # list. All but the cipher list will be null since we're handling a new transaction
            # (client hello) here. We have to determine how far out to parse the initial record
            # so we can find the TLS extensions if they exist.

            set record_offset 43
            binary scan [TCP::payload] @${record_offset}c tls_sessidlen
            set record_offset [expr {$record_offset + 1 + $tls_sessidlen}]
            binary scan [TCP::payload] @${record_offset}S tls_ciphlen
            set record_offset [expr {$record_offset + 2 + $tls_ciphlen}]
            binary scan [TCP::payload] @${record_offset}c tls_complen
            set record_offset [expr {$record_offset + 1 + $tls_complen}]

            # If we're in TLS and we've not parsed all the payload in the record
            # at this point, then we have TLS extensions to process. We will detect
            # the TLS extension package and parse each record individually.

            if { ([TCP::payload length] >= $record_offset) } {
                binary scan [TCP::payload] @${record_offset}S tls_extenlen
                set record_offset [expr {$record_offset + 2}]
                binary scan [TCP::payload] @${record_offset}a* tls_extensions

                # Loop through the TLS extension data looking for a type 00 extension
                # record. This is the IANA code for server_name in the TLS transaction.

                for { set x 0 } { $x < $tls_extenlen } { incr x 4 } {
                    set start [expr {$x}]
                    binary scan $tls_extensions @${start}SS etype elen
                    if { ($etype == "00") } {

                        # A servername record is present. Pull this value out of the packet data
                        # and save it for later use. We start 9 bytes into the record to bypass
                        # type, length, and SNI encoding header (which is itself 5 bytes long), and
                        # capture the servername text (minus the header).

                        set grabstart [expr {$start + 9}]
                        set grabend [expr {$elen - 5}]
                        binary scan $tls_extensions @${grabstart}A${grabend} tls_servername
                        set start [expr {$start + $elen}]
                    } else {

                        # Bypass all other TLS extensions.

                        set start [expr {$start + $elen}]
                    }
                    set x $start
                }

                # Check to see whether we got a servername indication from TLS. If so,
                # make the appropriate changes.

                if { ([info exists tls_servername] ) } {

                    # Look for a matching servername in the Data Group and pool.

                    set ssl_profile [class match -value [string tolower $tls_servername] equals tls_servername]
                    set tls_pool [class match -value [string tolower $tls_servername] equals tls_servername_pool]

                    if { $ssl_profile == "" } {

                        # No match, so we allow this to fall through to the "default"
                        # clientssl profile.

                        SSL::enable
                    } else {

                        # A match was found in the Data Group, so we will change the SSL
                        # profile to the one we found. Hide this activity from the iRules
                        # parser.

                        set ssl_profile_enable "SSL::profile $ssl_profile"
                        catch { eval $ssl_profile_enable }
                        if { not ($tls_pool == "") } {
                            pool $tls_pool
                        } else {
                            pool $default_tls_pool
                        }
                        SSL::enable
                    }
                } else {

                    # No match because no SNI field was present. Fall through to the
                    # "default" SSL profile.

                    SSL::enable
                }

            } else {

                # We're not in a handshake. Keep on using the currently set SSL profile
                # for this transaction.

                SSL::enable
            }

            # Hold down any further processing and release the TCP session further
            # down the event loop.

            set detect_handshake 0
            TCP::release
        } else {

            # We've not been able to match an SNI field to an SSL profile. We will
            # fall back to the "default" SSL profile selected (this might lead to
            # certificate validation errors on non SNI-capable browsers).

            set detect_handshake 0
            SSL::enable
            TCP::release

        }
    }
}

FTPS Offload via iRules
Question: Does BIG-IP LTM support FTPS?
Answer: You might think to yourself "LTM can load balance any IP traffic, so sure!". But if you know FTPS, you know that, like FTP, things are a lot more complicated than most protocols. And although there is an FTP profile to allow us to effortlessly support FTP, there is no FTPS profile. And since FTPS involves encryption, iRules become tough. But the answer is "Yes, we can load balance FTPS, with a little iRule help."

Question: Can BIG-IP LTM offload encryption from FTPS?
Answer: You might know that FTPS uses regular SSL, just like HTTPS, so you might think you could just use a clientssl profile. I'd like to say you are right, but at least today that won't work. However, with some iRule help, we can offload FTPS as long as we don't need to support Active transfers (see below).

Question: Does BIG-IP LTM support SFTP?
Answer: SFTP is in no way related to FTPS. SFTP uses the SSH protocol. Even though the LTM uses SSH for administrative purposes, we cannot decrypt or offload SSH traffic in the traffic path. Fortunately, since SFTP uses one simple TCP connection from each client to each server, we can load balance SFTP just like we can any other generic TCP traffic.

FTP Basics
----------
Since FTPS is simply an encrypted version of FTP, we need to talk a little about FTP first.

Control Channel: the FTP client connects to the FTP server on port 21. This connection is called the Control Channel. The control channel is how the client logs in, changes directories, requests file listings, and requests file transfers.

Active FTP: the original way to transfer files and directory listings via FTP is called Active Mode. The client issues a PORT command and tells the server its IP address and a port to connect to. The server then opens a new TCP connection to the client and begins the transfer. The outbound data connection from the server always originates on port 20.
Passive FTP: active FTP has long plagued firewalls and NAT environments, and in general it doesn't make as much sense today for the server to be initiating new connections to the client. So Passive FTP is often the default behavior in FTP clients (such as web browsers) today. With Passive FTP transfers, the client issues a PASV command and the server responds with an IP address and a port. The client then connects to that port and the transfer begins. The FTP server will generally give out its own IP address, but most FTP servers can be configured to give out a specific IP address (such as the VIP address in a load balancing environment).

BIG-IP FTP Profile: so why do we need an FTP profile on the BIG-IP? It provides a number of important features:

1) When the server creates outbound TCP connections for active transfers, the profile allows these connections to be established even if they don't match a virtual server (essentially they match the virtual server with the FTP profile). In addition, if SNAT is being used, when the client sends its IP address in the PORT command, the profile changes this IP to the SNAT address to make sure the connection passes through the BIG-IP.

2) When the client sends a PASV command to initiate a passive transfer, the FTP profile will change the IP address provided by the server so it is the VIP address. The LTM will then allow this connection from the client (on the VIP address but on a port other than 21) even though there is no explicit virtual server defined on that port.

3) While a transfer is occurring, the control channel remains idle. Without the FTP profile, a long transfer may cause the control channel to be closed due to an idle timeout. When the transfer completes, most clients will indicate a failure because the control connection was lost. The FTP profile ties the control and data connections together so that as long as one of them is active neither will time out.
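The address and port in a PASV reply are embedded in the message text itself as six decimal numbers, with the port split into a high and a low byte (RFC 959). That's exactly why a proxy has to rewrite the payload to substitute the VIP address, and why it can't once the control channel is encrypted. A quick sketch of how a client decodes the reply:

```python
import re

def parse_pasv(reply: str):
    """Parse a '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' reply
    into the IP and port the client should connect to (RFC 959):
    port = p1 * 256 + p2."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    h1, h2, h3, h4, p1, p2 = (int(x) for x in m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

ip, port = parse_pasv("227 Entering Passive Mode (172,16,59,163,195,80)")
assert (ip, port) == ("172.16.59.163", 50000)
```

The comma-separated form (e.g. `172,16,59,163`) is also the form you'll see being rewritten in the commented-out `regsub` in the offload iRule later in this article.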
Overview of FTPS
----------------
FTPS is a secure implementation of the FTP protocol. It has no relation at all to SFTP. The old way of using FTPS was called "Implicit FTPS". The way this works is that the client connects to a special FTPS port (usually 990) and immediately begins an SSL handshake (the same as a web browser does when connecting to port 443). We won't cover Implicit FTPS in this discussion as it is generally considered deprecated, but both of these solutions should work for it with little or no modification. The modern implementation of FTPS is called Explicit FTPS. The way this works is the FTPS client connects to the server on port 21 just like an FTP client. Then the client issues either an "AUTH TLS" or "AUTH SSL" command. Once this command is issued, the server acknowledges it and the client begins an SSL handshake. From that point forward, the control channel (client connection to port 21) is encrypted and one can't see the commands within that channel without decrypting it first. Many clients, however, will issue the Clear Control Channel (CCC) command after logging in so the rest of the control session will return to plain text. This is to benefit network firewalls and other devices that rely on seeing inside of FTP control channels to allow data connections to be opened and timeouts of the two separate channels to be linked together. The data channel, by default, will also be encrypted. If the server connects to the client (active FTP transfer), the client will first begin SSL negotiation. If the client connects to the server (passive FTP transfer), the client will also begin SSL negotiation. Note that the BIG-IP FTP Profile does not currently support FTPS. If the client sends the AUTH TLS or AUTH SSL commands, the message will be ignored (not sent to the server) and the client will hang waiting for a response from the server. For more information, Wikipedia has a great write-up here: http://en.wikipedia.org/wiki/FTPS.
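The explicit-FTPS sequence described above (connect to port 21 in the clear, issue AUTH TLS, then protect the data channel with PROT P) is what Python's standard-library FTP_TLS client implements, which makes it a handy way to see the command flow from the client side. The host and credentials below are placeholders:

```python
from ftplib import FTP_TLS

def fetch_listing(host: str, user: str, password: str) -> list[str]:
    """Explicit-FTPS session sketch: FTP_TLS connects to port 21 in
    plain text, sends AUTH TLS and upgrades the control channel before
    credentials are sent, then PROT P encrypts the data channel too."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)   # AUTH TLS is issued before USER/PASS
    ftps.prot_p()                # sends PBSZ 0 then PROT P
    lines: list[str] = []
    ftps.retrlines("LIST", lines.append)
    ftps.quit()
    return lines
```

Watching this exchange in a packet capture shows exactly what LTM sees: everything after the AUTH TLS acknowledgment is opaque, which is why the FTP profile (and any naive payload inspection) stops working at that point.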
Solution #1: Load balancing FTPS
--------------------------------
With a simple iRule and the proper LTM configuration, you can fully load balance both FTP and Explicit FTPS. This will support both active and passive transfers. A clear control channel is not required, but it does not hurt either.

1) Servers must point their default gateway to the LTM (we can't use SNAT because we can't see or alter the client's IP in PORT commands over encrypted FTPS).

2) Servers must be configured to hand out the VIP address for any Passive transfers (we can't modify the IP address the server sends in response to PASV commands because this may be encrypted).

3) An inbound virtual server is defined on the VIP address and the FTP port (21). The FTP profile is *not* enabled, as this will break FTPS. The timeout needs to be long, as the control channel will sit idle during a long transfer, and if the control channel is closed the transfer will fail.

4) Another inbound virtual server is defined on the VIP address and All Ports. This will catch all Passive FTP transfer connections from clients.

5) A source address persistence profile is defined that matches across services and across virtuals. It is applied to both inbound virtual servers so that the passive transfer connections are sent to the same server that has the control connection from that client.

6) Since the servers are using the LTM as their default gateway, you will probably need a default Forwarding (IP) virtual server doing all addresses, all ports, all protocols.

7) You will also need a forwarding virtual server that specifically matches outbound TCP traffic (but still all IPs and all ports). This virtual server will catch outbound Active FTP transfers and any other outbound TCP connections.
The following iRule needs to be applied to this virtual server to make sure that any outbound active FTP transfers (coming from port 20) are SNAT'd to the VIP address (W.X.Y.Z):

when CLIENT_ACCEPTED {
    if { [TCP::remote_port] eq "20" } {
        snat W.X.Y.Z 20
    }
}

Here is how this works:

Active transfers: the client connects to the VIP and is load balanced to a server. SSL begins on this connection. The client issues a PORT command to transfer a file and includes its own IP address and a port. The server then initiates an outbound TCP connection to that client IP and port, which goes through the LTM because the LTM is the default gateway of the server. This connection matches the outbound TCP forwarding virtual server defined in step #7. The source port of this connection will always be port 20, so the data connection will be SNAT'd by the iRule.

Passive transfers: the client connects to the VIP and is load balanced to a server. SSL begins on this connection. The client issues a PASV command to transfer a file. The server responds with the VIP address (since it was configured that way in step #2) on a random port. When the client connects to that new port, it matches the other inbound virtual server (#4 above). Because of the persistence profile, the new inbound connection will be load balanced to the same server that the control connection was already connected to.

Note that if you could assume that clients will always issue the Clear Control Channel (CCC) command after authentication, you could use SNAT and the server would not have to hand out the VIP address if you wrote additional iRules to do the proper modifications (i.e. basically simulate the functionality of the FTP profile in an iRule).

Solution #2: Offloading SSL for FTPS
------------------------------------
This solution will handle both load balancing and offloading of SSL for FTPS.
It will NOT support Active FTPS transfers -- only Passive FTPS transfers will work. (This is because of the strange way active FTPS SSL negotiations work -- the server initiates a connection to the client, but the client begins the SSL handshake.) This solution could support the Clear Control Channel command but currently does not. This is really a pretty experimental solution and would need to be improved to make it more robust in a production environment. The second iRule has some code commented out to replace the IP the server sends for passive transfers, but it would be easier just to configure your server to hand out the VIP address.

x.x.x.x: external VIP address
y.y.y.y: any internal IP address that is not in use

1) You must be running 9.4.x, as we will be using the "virtual" command to send traffic to another virtual server.

2) A source address persistence profile is defined that matches across services and across virtuals. It is applied to both inbound virtual servers so that the passive transfer connections are sent to the same server that has the control connection from that client.

3) Define the first virtual server (where the clients actually connect) on VIP address x.x.x.x and port 21. This virtual server needs to have a CLIENTSSL profile associated with it with a valid SSL certificate for the server (the same way you'd do it if you were offloading SSL for HTTPS). This virtual server does not need a default pool. This virtual server also needs this iRule applied to it (with y.y.y.y replaced with the actual internal IP address):

when CLIENT_ACCEPTED {
    log local0. "client accepted"
    SSL::disable
    TCP::respond "220 My ftp server\r\n"
    TCP::collect
}
when CLIENT_DATA {
    log local0. "client data"
    TCP::respond "234 AUTH TLS Successful\r\n"
    TCP::payload replace 0 [TCP::payload length] ""
    virtual VS2FTP
    SSL::enable
    TCP::release
    log local0. "TCP Release Completed"
}

4) Define a second, standard virtual server on an internal address (where the first virtual server connects to) named "internal-y.y.y.y-999" with FTP servers (on port 21) as pool members. Apply the persistence profile. It needs the following iRule:

when CLIENT_ACCEPTED {
    TCP::collect
}
when CLIENT_DATA {
    if { [TCP::payload] contains "PBSZ" } {
        TCP::payload replace 0 [TCP::payload length] ""
        TCP::respond "200 PBSZ 0 successful\r\n"
    } elseif { [TCP::payload] contains "PROT P" } {
        TCP::respond "200 Protection set to Private\r\n"
        TCP::payload replace 0 [TCP::payload length] ""
    } elseif { [TCP::payload] contains "FEAT" } {
        TCP::payload replace 0 [TCP::payload length] ""
        TCP::respond "211-Features: MDTM REST STREAM SIZE AUTH TLS PBSZ PROT\r\n211 End\r\n"
    }
    TCP::release
    TCP::collect
}
when SERVER_CONNECTED {
    TCP::collect
}
when SERVER_DATA {
    if { [TCP::payload] contains "220 " } {
        TCP::payload replace 0 [TCP::payload length] ""
    } elseif { [TCP::payload] contains "Entering Passive Mode" } {
        # You need to modify this section if your servers are not
        # configured to hand out the VIP address for Passive transfers.
        #regsub {10,10,71,1} [TCP::payload] "172,16,59,163" tmpstr
        #TCP::payload replace 0 [TCP::payload length] $tmpstr
    }
    TCP::release
    TCP::collect
}

5) The third virtual server catches inbound client passive transfer connections. It is defined on VIP address x.x.x.x and all ports. It must have the same CLIENTSSL profile as the first virtual server, the same pool as the first virtual server, and the same persistence profile as the first virtual server. It needs this iRule as well:

when CLIENT_ACCEPTED {
    SSL::disable
    TCP::collect 0 0
}
when CLIENT_DATA {
    SSL::enable
}
when SERVER_CONNECTED {
    SSL::enable clientside
}

Selective Client Cert Authentication
SSL encryption on the web is not a new concept to the general population of the internet. Those of us that frequent many websites per week (day, hour, minute, etc.) are quite used to making use of SSL encryption for security purposes. It's an accepted standard, and we're all fairly used to dealing with it in varied capacities. Whether it's that nifty yellow URL bar in Firefox, or the security warning saying that portions of the site you're going to are unencrypted, we've likely seen it before, and are comfortable with it in day to day operation. What if, however, I wanted to get more out of my certificates? One of the more common, behind the scenes things that gets done with certificates is authentication. Via client-cert authentication users can have a "passwordless" user experience, automatic authentication into multiple apps with different access levels, and a smooth browsing experience with the applications in question. Combine these nice-to-have features with the improved security (it's much harder to spoof a client cert than a password), and it's not surprising that a fair number of companies are putting this type of authentication into place. That's all well and good, but what if you don't want your entire site to be authenticated this way? What if you only want users trying to access certain portions of the site to be required to present a valid client cert? What's more, what if you need to pass along some of the information from the certificate to the back end application? Extracting things like the issuer, subject and version can be necessary in some of these situations. That's a fair amount of application layer overhead to put on your application servers - inspecting every client request, determining the intended location, negotiating client-cert authentication if necessary, passing that info on, etc. etc. Wouldn't it be nice if you could not only offload all of this overhead, but the management overhead of the setup as well?
As is often the case, with iRules, you can. With the below example iRule not only can you selectively require a certificate from inbound users depending on (in this case) the requested URI, but you can also extract valuable cert information from the client and insert it into HTTP headers to be passed back to the application servers for whatever processing needs they might have. This allows you to fine-tune the user experience of your application or site for those users who need access via client-cert authentication, but not affect those that don't. You can even custom define the actions for the iRule to take in the case that a user requests a URI that requires authentication but doesn't have the appropriate cert. There is a little configuration that needs to be done, like setting up a Client SSL profile to decrypt the SSL traffic coming in, but that should be simple enough. The iRule itself is pretty straightforward. It uses the matchclass command to compare the URI to a list of known URIs that require authentication (class not shown in the example). If it finds a match, it uses the SSL commands to check for and require a certificate. Once this is found it uses the X509 commands to poll cert information and include it in some custom HTTP headers that the back end servers can look for.
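The selection logic the iRule performs with "matchclass ... starts_with" is simple prefix matching of the request URI against the class of protected paths. A Python model of that check, with example prefixes (the actual class contents are whatever you define on your LTM):

```python
# Model of the "matchclass [HTTP::uri] starts_with ..." check: only URIs
# beginning with a protected prefix trigger the client-cert requirement.
# These prefixes are illustrative, not part of the original example.

PROTECTED_PREFIXES = ["/admin", "/account", "/api/internal"]

def requires_client_cert(uri: str) -> bool:
    return any(uri.startswith(p) for p in PROTECTED_PREFIXES)

assert requires_client_cert("/admin/users")
assert not requires_client_cert("/public/index.html")
```

Everything else in the iRule hangs off this one decision: a match kicks off the SSL renegotiation demanding a cert, and a miss leaves the request completely untouched.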
when CLIENTSSL_CLIENTCERT {
    HTTP::release
    if { [SSL::cert count] < 1 } {
        reject
    }
}
when HTTP_REQUEST {
    if { [matchclass [HTTP::uri] starts_with $::requires_client_cert] } {
        if { [SSL::cert count] <= 0 } {
            HTTP::collect
            SSL::authenticate always
            SSL::authenticate depth 9
            SSL::cert mode require
            SSL::renegotiate
        }
    }
}
when HTTP_REQUEST_SEND {
    clientside {
        if { [SSL::cert count] > 0 } {
            HTTP::header insert "X-SSL-Session-ID" [SSL::sessionid]
            HTTP::header insert "X-SSL-Client-Cert-Status" [X509::verify_cert_error_string [SSL::verify_result]]
            HTTP::header insert "X-SSL-Client-Cert-Subject" [X509::subject [SSL::cert 0]]
            HTTP::header insert "X-SSL-Client-Cert-Issuer" [X509::issuer [SSL::cert 0]]
        }
    }
}

As you can see there is a fair amount of room for further customization, as was partly mentioned above. Things like dealing with custom error pages or routing for requests that should require authentication but don't provide a cert, allowing different levels of access based on the cert information collected, etc. All in all this iRule represents a relatively simple solution to a more complex problem and does so in a manner that's easy to implement and maintain. That's the power of iRules, in a nutshell.
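On the back-end side, consuming the inserted headers is just a lookup by name. A hypothetical sketch of what an application behind the VIP might do (the header names match the iRule above; the dict-of-headers shape and function name are assumptions for illustration):

```python
def cert_info(headers: dict) -> dict:
    """Collect the client-cert details the iRule inserted into HTTP headers.
    Missing headers (e.g. a request that didn't require a cert) come back
    as None, so the app can branch on their presence."""
    return {
        "session_id": headers.get("X-SSL-Session-ID"),
        "status": headers.get("X-SSL-Client-Cert-Status"),
        "subject": headers.get("X-SSL-Client-Cert-Subject"),
        "issuer": headers.get("X-SSL-Client-Cert-Issuer"),
    }

info = cert_info({"X-SSL-Client-Cert-Subject": "CN=alice,O=Example"})
assert info["subject"] == "CN=alice,O=Example"
assert info["session_id"] is None
```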