Controlling a Pool Member's Ratio and Priority Group with iControl
A Little Background

A question came in through the iControl forums about controlling a pool member's ratio and priority programmatically. The issue really involves how the APIs use multi-dimensional arrays, but I thought it would be a good opportunity to talk about ratio and priority groups for those who don't understand how they work. In the first part of this article, I'll talk a little about what pool members are and how their ratio and priorities apply to how traffic is assigned to them in a load balancing setup. The details in this article are based on BIG-IP version 11.1, but the concepts apply to previous versions as well.

Load Balancing

In its most basic form, a load balancing setup involves a virtual IP address (referred to as a VIP) that virtualizes a set of backend servers. The idea is that if your application gets very popular, you don't want to have to rely on a single server to handle the traffic. A VIP contains an object called a "pool", which is essentially a collection of servers that it can distribute traffic to. The method of distributing traffic is referred to as a "Load Balancing Method". You may have heard the term "Round Robin" before. In this method, connections are passed one at a time from server to server. In most cases though, this is not the best method due to characteristics of the application you are serving. Here is a list of the available load balancing methods in BIG-IP version 11.1.

Load Balancing Methods in BIG-IP version 11.1

Round Robin: Specifies that the system passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. This method works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.

Ratio (member): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine within the pool.

Least Connections (member): Specifies that the system passes a new connection to the node that has the least number of current connections in the pool. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Observed (member): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (member), in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.

Predictive (member): Uses the ranking method used by the Observed (member) method, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.

Ratio (node): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine across all pools of which the server is a member.

Least Connections (node): Specifies that the system passes a new connection to the node that has the least number of current connections out of all pools of which a node is a member. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node, or the fastest node response time.

Fastest (node): Specifies that the system passes a new connection based on the fastest response of all pools of which a server is a member. This method might be particularly useful in environments where nodes are distributed across different logical networks.

Observed (node): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (node), in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.

Predictive (node): Uses the ranking method used by the Observed (node) method, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.

Dynamic Ratio (node): This method is similar to Ratio (node) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

Fastest (application): Passes a new connection based on the fastest response of all currently active nodes in a pool. This method might be particularly useful in environments where nodes are distributed across different logical networks.

Least Sessions: Specifies that the system passes a new connection to the node that has the least number of current sessions. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current sessions.

Dynamic Ratio (member): This method is similar to Ratio (member) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

L3 Address: This method functions in the same way as the Least Connections methods. We are deprecating it, so you should not use it.

Weighted Least Connections (member): Specifies that the system uses the value you specify in Connection Limit to establish a proportional algorithm for each pool member. The system bases the load balancing decision on that proportion and the number of current connections to that pool member. For example, member_a has 20 connections and its connection limit is 100, so it is at 20% of capacity. Similarly, member_b has 20 connections and its connection limit is 200, so it is at 10% of capacity. In this case, the system selects member_b. This algorithm requires all pool members to have a non-zero connection limit specified.

Weighted Least Connections (node): Specifies that the system uses the value you specify in the node's Connection Limit and the number of current connections to a node to establish a proportional algorithm. This algorithm requires all nodes used by pool members to have a non-zero connection limit specified.

Ratios

The ratio is used by the ratio-related load balancing methods to load balance connections. The ratio specifies the ratio weight to assign to the pool member. Valid values range from 1 through 100. The default is 1, which means that each pool member has an equal ratio proportion. So, if you have server1 with a ratio value of "10" and server2 with a ratio value of "1", server1 will get served 10 connections for every one that server2 receives. This can be useful when you have different classes of servers with different performance capabilities.

Priority Group

The priority group is a number that groups pool members together. The default is 0, meaning that the member has no priority. To specify a priority, you must activate priority group usage when you create a new pool or when adding or removing pool members. When activated, the system load balances traffic according to the priority group number assigned to the pool member. The higher the number, the higher the priority, so a member with a priority of 3 has higher priority than a member with a priority of 1. The easiest way to think of priority groups is as if you are creating mini-pools of servers within a single pool. You put members A, B, and C into priority group 5 and members D, E, and F in priority group 1. Members A, B, and C will be served traffic according to their ratios (assuming you have ratio load balancing configured). If all those servers have reached their thresholds, then traffic will be distributed to servers D, E, and F in priority group 1.

The default setting for priority group activation is Disabled. Once you enable this setting, you can specify pool member priority when you create a new pool or on a pool member's properties screen. The system treats same-priority pool members as a group. To enable priority group activation in the admin GUI, select Less than from the list, and in the Available Member(s) box, type a number from 0 to 65535 that represents the minimum number of members that must be available in one priority group before the system directs traffic to members in a lower priority group. When a sufficient number of members become available in the higher priority group, the system again directs traffic to the higher priority group.
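As an aside, the same ratio, priority group, and activation settings can be applied from the command line with tmsh. The following is only a sketch with made-up pool and member addresses; min-active-members should correspond to the "Less than" value used for priority group activation in the GUI:

# Illustrative tmsh commands (BIG-IP v11.x); pool name and member addresses are hypothetical
create ltm pool web_pool load-balancing-mode ratio-member min-active-members 2 members add { 10.10.10.1:80 { ratio 10 priority-group 5 } 10.10.10.2:80 { ratio 1 priority-group 5 } 10.10.10.3:80 { ratio 1 priority-group 1 } }

# Adjust a single member's ratio later
modify ltm pool web_pool members modify { 10.10.10.1:80 { ratio 5 } }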
Implementing in Code

The two methods to retrieve the priority and ratio values are very similar. They both take two parameters: a list of pools to query, and a 2-D array of members (a list for each pool passed in).

long [] [] get_member_priority(
    in String [] pool_names,
    in Common::AddressPort [] [] members
);
long [] [] get_member_ratio(
    in String [] pool_names,
    in Common::AddressPort [] [] members
);

The following PowerShell function (utilizing the iControl PowerShell Library) takes as input a pool and a single member. It then makes a call to query the ratio and priority for the specific member and writes them to the console.

function Get-PoolMemberDetails()
{
    param(
        $Pool = $null,
        $Member = $null
    );
    $AddrPort = Parse-AddressPort $Member;

    $RatioAofA = (Get-F5.iControl).LocalLBPool.get_member_ratio(
        @($Pool),
        @( @($AddrPort) )
    );
    $PriorityAofA = (Get-F5.iControl).LocalLBPool.get_member_priority(
        @($Pool),
        @( @($AddrPort) )
    );

    $ratio = $RatioAofA[0][0];
    $priority = $PriorityAofA[0][0];

    "Pool '$Pool' member '$Member' ratio '$ratio' priority '$priority'";
}

The set_member_priority and set_member_ratio methods take the same first two parameters as their associated get_* methods, but add a third parameter for the priorities or ratios to assign to the pool members.

set_member_priority(
    in String [] pool_names,
    in Common::AddressPort [] [] members,
    in long [] [] priorities
);
set_member_ratio(
    in String [] pool_names,
    in Common::AddressPort [] [] members,
    in long [] [] ratios
);

The following PowerShell function takes as input the Pool and Member with optional values for the Ratio and Priority. If either of those is set, the function will call the appropriate iControl methods to set their values.

function Set-PoolMemberDetails()
{
    param(
        $Pool = $null,
        $Member = $null,
        $Ratio = $null,
        $Priority = $null
    );
    $AddrPort = Parse-AddressPort $Member;

    if ( $null -ne $Ratio )
    {
        (Get-F5.iControl).LocalLBPool.set_member_ratio(
            @($Pool),
            @( @($AddrPort) ),
            @($Ratio)
        );
    }
    if ( $null -ne $Priority )
    {
        (Get-F5.iControl).LocalLBPool.set_member_priority(
            @($Pool),
            @( @($AddrPort) ),
            @($Priority)
        );
    }
}

In case you were wondering how to create the Common::AddressPort structure for the $AddrPort variables in the above examples, here's a helper function I wrote to allocate the object and fill in its properties.

function Parse-AddressPort()
{
    param($Value);
    $tokens = $Value.Split(":");
    $r = New-Object iControl.CommonAddressPort;
    $r.address = $tokens[0];
    $r.port = $tokens[1];
    $r;
}

Download The Source

The full source for this example can be found in the iControl CodeShare under PowerShell PoolMember Ratio and Priority.

Accessing TCP Options from iRules
I’ve written several articles on the TCP profile and enjoy digging into TCP. It’s a beast, and I am constantly re-learning the inner workings. Still etched in my visual memory map, however, is the TCP header format, shown in Figure 1 below. Since 9.0 was released, TCP payload data (that which comes after the header) has been consumable in iRules via TCP::payload, and the port information has been available in the contextual commands TCP::local_port/TCP::remote_port and of course TCP::client_port/TCP::server_port. Options, however, have been inaccessible. But beginning with version 10.2.0-HF2, it is now possible to retrieve data from the options fields.

Preparing the BIG-IP

Prior to version 11.0, it was necessary to set a bigpipe database key with the option (or options) of interest:

bigpipe db Rules.Tcpoption.settings [option, first|last], [option, first|last]

In version 11.0 and forward, the DB keys are no more and you need to create a tcp profile with these options defined, like so:

ltm profile tcp tcp_opt {
    app-service none
    tcp-options "{option first|last} {option first|last}"
}

The option is an integer between 2 and 255, and the first/last setting indicates whether the system will retain the first or last instance of the specified option. Once that key is set, you’ll need to do a bigstart restart for it to take effect (warning: service impacting). Note also that the LTM only collects option data starting with the ACK of a connection. The initial SYN is ignored even if you select the first keyword. This is done to prevent a SYN flood attack (in keeping with SYN-cookies).

A New iRules Command: TCP::option

The TCP::option command has the following syntax:

TCP::option get <option>

v11 Additions/Changes:

TCP::option set <option number> <value> <next|all>
TCP::option noset <option number>

Pretty simple, no? So now that you can access them, what fun can be had?

Real World Scenario: Akamai

In Akamai’s IPA and SXL product lines, they support client IP visibility by embedding a version number (one byte) and an IPv4 address (four bytes) as part of their overlay path feature in tcp option number 28. To access this data, we first create the tcp profile with the option of interest:

tmsh create ltm profile tcp tcp_opt tcp-options "{28 first}"

Now, the iRule utilizing the TCP::option command:

when CLIENT_ACCEPTED {
    set opt28 [TCP::option get 28]
    if { [string length $opt28] == 5 } {
        binary scan $opt28 cH8 ver addr
        if { $ver != 1 } {
            log local0. "Unsupported Akamai version: $ver"
        } else {
            scan $addr "%2x%2x%2x%2x" ip1 ip2 ip3 ip4
            set optaddr "$ip1.$ip2.$ip3.$ip4"
        }
    }
}
when HTTP_REQUEST {
    if { [info exists optaddr] } {
        HTTP::header insert "X-Forwarded-For" $optaddr
    }
}

The Akamai version should be one, so we log if not. Otherwise, we take the address (stored in the variable addr in hex) and scan it to get the decimal equivalents to build the address for inserting in the X-Forwarded-For header. Cool, right? Also cool—along with the new TCP::option command, an extension was made to the IP::addr command to parse binary fields into a dotted decimal IP address. This extension is also available beginning in 10.2.0-HF2, but extended in 11.0. Here’s the syntax:

IP::addr parse [-ipv4 | -ipv6 [swap]] <binary field> [<offset>]

So for example, if you had an IPv6 address in option 28 with a 1 byte offset, you would parse that like:

log local0. "IP::addr parse IPv6 output: [IP::addr parse -ipv6 [TCP::option get 28] 1]"

## Log Result ##
May 27 21:51:34 ltm13 info tmm[27207]: Rule /Common/tcpopt_test <CLIENT_ACCEPTED>: IP::addr parse IPv6 output: 2601:1930:bd51:a3e0:20cd:a50b:1cc1:ad13

But in the context of our TCP option, we have 5 bytes of data with the first byte not mattering in the context of an address, so we get at the address with this:

set optaddr [IP::addr parse -ipv4 [TCP::option get 28] 1]

This cleans up the rule a bit:

when CLIENT_ACCEPTED {
    set opt28 [TCP::option get 28]
    if { [string length $opt28] == 5 } {
        binary scan $opt28 c ver
        if { $ver != 1 } {
            log local0. "Unsupported Akamai version: $ver"
        } else {
            set optaddr [IP::addr parse -ipv4 $opt28 1]
        }
    }
}
when HTTP_REQUEST {
    if { [info exists optaddr] } {
        HTTP::header insert "X-Forwarded-For" $optaddr
    }
}

No need to store the address in the first binary scan and no need for the scan command at all, so I eliminated those. Setting a forwarding header is not the only thing we can do with this data. It could also be shipped off to a logging server, or used as a snat address (assuming the server had either a default route to the BIG-IP, or specific routes for the customer destinations, which is doubtful). Logging is trivial, shown below with the log command. The HSL commands could be used in lieu of log if sending off-box to a log server.

when CLIENT_ACCEPTED {
    set opt28 [TCP::option get 28]
    if { [string length $opt28] == 5 } {
        binary scan $opt28 c ver
        if { $ver != 1 } {
            log local0. "Unsupported Akamai version: $ver"
        } else {
            set optaddr [IP::addr parse -ipv4 $opt28 1]
            log local0. "Client IP extracted from Akamai TCP option is $optaddr"
        }
    }
}

If setting the provided IP as a snat address, you’ll want to make sure it’s a valid IP address before doing so. You can use the TCL catch command and IP::addr to perform this check as seen in the iRule below:

when CLIENT_ACCEPTED {
    set addrs [list \
        "192.168.1.1" \
        "256.168.1.1" \
        "192.256.1.1" \
        "192.168.256.1" \
        "192.168.1.256" \
    ]
    foreach x $addrs {
        if { [catch {IP::addr $x mask 255.255.255.255}] } {
            log local0. "IP $x is invalid"
        } else {
            log local0. "IP $x is valid"
        }
    }
}

The output of this iRule:

<CLIENT_ACCEPTED>: IP 192.168.1.1 is valid
<CLIENT_ACCEPTED>: IP 256.168.1.1 is invalid
<CLIENT_ACCEPTED>: IP 192.256.1.1 is invalid
<CLIENT_ACCEPTED>: IP 192.168.256.1 is invalid
<CLIENT_ACCEPTED>: IP 192.168.1.256 is invalid

Adding this logic into a functional rule with snat:

when CLIENT_ACCEPTED {
    set opt28 [TCP::option get 28]
    if { [string length $opt28] == 5 } {
        binary scan $opt28 c ver
        if { $ver != 1 } {
            log local0. "Unsupported Akamai version: $ver"
        } else {
            set optaddr [IP::addr parse -ipv4 $opt28 1]
            if { [catch {IP::addr $optaddr mask 255.255.255.255}] } {
                log local0. "$optaddr is not a valid address"
                snat automap
            } else {
                log local0. "Akamai inserted Client IP is $optaddr. Setting as snat address."
                snat $optaddr
            }
        }
    }
}

Alternative TCP Option Use Cases

The Akamai solution shows an application implementation taking advantage of normally unused space in TCP headers. There are, however, defined uses for several option "kind" numbers. The list is available here: http://www.iana.org/assignments/tcp-parameters/tcp-parameters.xml. Some options that might be useful in troubleshooting efforts:

Opkind 2 – Max Segment Size
Opkind 3 – Window Scaling
Opkind 5 – Selective Acknowledgements
Opkind 8 – Timestamps

Of course, with tcpdump you get all this plus the context of other header information and data, but hey, another tool in the toolbox, right?
Addendum

I've been working with F5 SE Leonardo Simon on additional examples I wanted to share here that use option 28 or 253 to extract an IPv6 address if the version is 34, and otherwise extract an IPv4 address if the version is 1 or 2.

Option 28

when CLIENT_ACCEPTED {
    set opt28 [TCP::option get 28]
    binary scan $opt28 c ver
    #log local0. "version: $ver"
    if { $ver == 34 } {
        set optaddr [IP::addr parse -ipv6 $opt28 1]
        log local0. "opt28 ipv6 address: $optaddr"
    } elseif { $ver == 1 || $ver == 2 } {
        set optaddr [IP::addr parse -ipv4 $opt28 1]
        log local0. "opt28 ipv4 address: $optaddr"
    }
}

Option 253

when CLIENT_ACCEPTED {
    set opt253 [TCP::option get 253]
    binary scan $opt253 c ver
    #log local0. "version: $ver"
    if { $ver == 34 } {
        set optaddr [IP::addr parse -ipv6 $opt253 1]
        log local0. "opt253 ipv6 address: $optaddr"
    } elseif { $ver == 1 || $ver == 2 } {
        set optaddr [IP::addr parse -ipv4 $opt253 1]
        log local0. "opt253 ipv4 address: $optaddr"
    }
}

Two-Factor Authentication With Google Authenticator And APM
Introduction

Two-factor authentication (TFA) has been around for many years and the concept far pre-dates computers. The application of a keyed padlock and a combination lock to secure a single point would technically qualify as two-factor authentication: "something you have," a key, and "something you know," a combination. Until the past few years, two-factor authentication in its electronic form has been reserved for high security environments: government, banks, large companies, etc. The most common method for implementing a second authentication factor has been to issue every employee a disconnected time-based one-time password hard token. The term "disconnected" refers to the absence of a connection between the token and a central authentication server. A "hard token" implies that the device is purpose-built for authentication and serves no other purpose. A soft or "software" token, on the other hand, has other uses beyond providing an authentication mechanism. In the context of this article we will refer to mobile devices as soft tokens. This fits our definition as the device can be used to make phone calls, check email, and surf the Internet, all in addition to providing a time-based one-time password.

A time-based one-time password (TOTP) is a single-use code for authenticating a user. It can be used by itself or to supplement another authentication method. It fits the definition of "something you have" as it cannot be easily duplicated and reused elsewhere. This differs from a username and password combination, which is "something you know," but could be easily duplicated by someone else. The TOTP uses a shared secret and the current time to calculate a code, which is displayed for the user and regenerated at regular intervals. Because the token and the authentication server are disconnected from each other, the clocks of each must be perfectly in sync. This is accomplished by using Network Time Protocol (NTP) to synchronize the clocks of each device with the correct time of central time servers.

Using Google Authenticator as a soft token application makes sense from many angles. It is low cost due to the proliferation of smart phones and is available from the "app store" free of charge on all major platforms. It uses an open standard (defined by RFC 4226), which means that it is well-tested, well understood, and secure. Calculation, as you will see later, is well-documented and relatively easy to implement in your language of choice (iRules in our case). This process is explained in the next section.

This Tech Tip is a follow-up to Two-Factor Authentication With Google Authenticator And LDAP. The first article in this series highlighted two-factor authentication with Google Authenticator and LDAP on an LTM. In this follow-up, we will be covering implementation of this solution with Access Policy Manager (APM). APM allows for far more granular control of network resources via access policies. Access policies are rule sets, which are intuitively displayed in the UI as flow charts. After creation, an access policy is applied to a virtual server to provide security, authentication services, client inspection, policy enforcement, etc. This article highlights not only a two-factor authentication solution, but also the usage of iRules within APM policies. By combining the extensibility of iRules with the APM's access policies, we are able to create virtually any functionality we might need.

Note: A 10-user fully-featured APM license is included with every LTM license. You do not need to purchase an additional module to use this feature if you have fewer than 10 users.

Calculating The Google Authenticator TOTP

The Google Authenticator TOTP is calculated by generating an HMAC-SHA1 token, which uses a 10-byte (base32-encoded) shared secret as a key and Unix time (epoch) divided into 30-second intervals as inputs. The resulting 160-bit (20-byte) token is converted to a 40-character hexadecimal string, and the least significant (last) hex digit is then used to calculate a 0-15 offset. That offset is then used to read the next 8 hex digits (4 bytes) starting from the offset. The resulting 8 hex digits are then AND'd with 0x7FFFFFFF (2,147,483,647), then the modulo of the resultant integer and 1,000,000 is calculated, which produces the correct code for that 30-second period.

Base32 encoding and decoding were covered in my previous Tech Tip titled Base32 Encoding And Decoding With iRules. The Tech Tip details the process for decoding a user's base32-encoded key to binary as well as converting a binary key to base32. The HMAC-SHA256 token calculation iRule was originally submitted by Nat to the Codeshare on DevCentral. The iRule was slightly modified to support the SHA-1 algorithm, but is otherwise taken directly from the pseudocode outlined in RFC 2104. These two pieces of code contribute the bulk of the processing of the Google Authenticator code. The rest is done with simple bitwise and arithmetic functions.

Triggering iRules From An APM Access Policy

Our previously published Google Authenticator iRule combined the functionality of Google Authenticator token verification with LDAP authentication. It was written for a standalone LTM system without the leverage of APM's Visual Policy Editor. The issue with combining these two authentication factors in a single iRule is that their functionality is not mutually exclusive or easily separable. We can greatly reduce the complexity of our iRule by isolating the functionality for Google Authenticator token verification and moving the directory server authentication to the APM access policy.

APM iRules differ from those that we typically develop for LTM. iRules assigned to an LTM virtual server are triggered by events that occur during connection or payload handling. Many of these events still apply to an LTM virtual server with an APM policy, but do not have perspective into the access policy. This is where we enter the realm of APM iRules. APM iRules are applied to a virtual server exactly like any other iRule, but are triggered by custom iRule event agent IDs within the access policy. When the access policy reaches an iRule event, it will trigger the ACCESS_POLICY_AGENT_EVENT iRule event. Within the iRule we can execute the ACCESS::policy agent_id command to return the iRule event ID that triggered the event. We can then match on this ID string prior to executing any additional code. Within the iRule we can get and set APM session variables with the ACCESS::session command, which will serve as our conduit for transferring variables to and from our access policy. A visual walkthrough of this paragraph is shown below.
iRule Trigger Process

Create an iRule Event in the Visual Policy Editor
Specify a Name for the object and an ID for the Custom iRule Event Agent
Create an iRule with the ID referenced and assign it to the virtual server

when ACCESS_POLICY_AGENT_EVENT {
    if { [ACCESS::policy agent_id] eq "ga_code_verify" } {
        # get APM session variables
        set username [ACCESS::session data get session.logon.last.username]

        ### Google Authenticator token verification (code omitted for brevity) ###

        # set APM session variables
        ACCESS::session data set session.custom.ga_result $ga_result
    }
}

Add branch rules to the iRule Event which read the custom session variable and handle the result

Google Authenticator Two-Factor Authentication Process

Two-Factor Authentication Access Policy Overview

Rather than walking through the entire process of configuring the access policy from scratch, we'll look at the policy (available for download at the bottom of this Tech Tip) and discuss the flow. The policy has been simplified by creating macros for the redundant portions of the authentication process: Google Authenticator token verification and the two-factor authentication processes for LDAP and Active Directory. The "Google Auth verification" macro consists of an iRule event and 5 branch rules. The number of branch rules could be reduced to just two: success and failure. This would however limit our diagnostic capabilities should we hit a snag during our deployment, so we added logging for all of the potential failure scenarios. Remember that these logs are sent to APM reporting (Web UI: Access Policy > Reports), not /var/log/ltm. APM reporting is designed to provide per-session logging in the user interface without requiring grepping of the log files.

The LDAP and Active Directory macros contain the directory server authentication and query mechanisms. Directory server queries are used to retrieve user information from the directory server. In this case we can store our Google Authenticator key (shared secret) in a schema attribute to remove a dependency from our BIG-IP. We do however offer the ability to store the key in a data group as well.

The main portion of the access policy is far simpler and easier to read by using macros. When the user first enters our virtual server we look at the Landing URI they are requesting. A first time request will be sent to the "normal" logon page. The user will then input their credentials along with the one-time password provided by the Google Authenticator token. If the user's credentials and one-time password are correct, they are allowed access. If they fail the authentication process, we increment a counter via a table in our iRule and redirect them back to an "error" logon page. The "error" logon page notifies them that their credentials are invalid. The notification makes no reference as to which of the two factors they failed. If the user exceeds the allowed number of failures for a specified period of time, their session will be terminated and they will be unable to login for a short period of time. An authenticated user would be allowed access to secured resources for the duration of their session.
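For reference, the branch rules attached to the iRule Event described above read the custom session variable with an ordinary APM advanced expression. A minimal sketch follows; the value compared against is only a placeholder, since it depends on what your verification iRule actually stores in session.custom.ga_result:

expr { [mcget {session.custom.ga_result}] == 1 }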
Deploying Google Authenticator Token Verification

This solution requires three components (one optional) for deployment:

Sample access policy
Google Authenticator token verification iRule
Google Authenticator token generation iRule (optional)

The process for deploying this solution has been divided into four sections:

Configuring an AAA server

Log in to the Web UI of your APM
From the side panel select Access Policy > AAA Servers > Active Directory, then click the + next to it to create a new AD server
Within the AD creation form you'll need to provide a Name, Domain Controller, Domain Name, Admin Username, and Admin Password
When you have completed the form click Finished

Copy the iRule to BIG-IP and configure options

Download a copy of the Google Authenticator Token Verification iRule for APM from the DevCentral CodeShare (hint: this is much easier if you "edit" the wiki page to display the source without the line numbers and formatting)
Navigate to Local Traffic > iRules > iRule List and click the + symbol
Name the iRule "google_auth_verify_apm," then copy and paste the iRule from the CodeShare into the Definition field
At the top of the iRule there are a few options that need to be defined:

lockout_attempts - number of attempts a user is allowed to make prior to being locked out temporarily (default: 3 attempts)
lockout_period - duration of lockout period (default: 30 seconds)
ga_code_form_field - name of the HTML form field used in the APM logon page; this field is defined in the "Logon Page" access policy object (default: ga_code_attempt)
ga_key_storage - key storage method for users' Google Authenticator shared keys; valid options include: datagroup, ldap, or ad (default: datagroup)
ga_key_ldap_attr - name of LDAP schema attribute containing users' key
ga_key_ad_attr - name of Active Directory schema attribute containing users' key
ga_key_dg - data group containing user := key mappings

Click Finished when you've configured the iRule options to your liking

Import sample access policy

From the Web UI, select Access Policy > Access Profiles > Access Profiles List
In the upper right corner, click Import
Download the sample policy for Two-Factor Authentication With Google Authenticator And APM and extract the .conf from the ZIP archive
Fill in the New Profile Name with a name of your choosing, then select Choose File, navigate to the extracted sample policy, and Open
Click Import to complete the policy import
The sample policy's AAA servers will likely not work in your environment. From the Access Policy List, click Edit next to the imported policy
When the Visual Policy Editor opens, expand the macro (LDAP or Active Directory auth) that describes your environment
Click the AD Auth object, select the AD server from the drop-down that was defined earlier in the AAA Servers step, then click Save
Repeat this process for the AD Query object

Assign sample policy and iRule to a virtual server

From the Web UI, select Local Traffic > Virtual Servers > Virtual Server List, then the create button (+)
In the New Virtual Server form, fill in the Name, Destination address, and Service Port (should be HTTPS/443), then select an HTTP profile and an SSL Profile (Client). Next you'll add a SNAT Profile if needed, an Access Profile, and finally the token verification iRule
Depending on your deployment you may want to add a pool or other network connectivity resources
Finally click Finished

At this point you should have a functional virtual server that is serving your access policy.
You’ll now need to add some tokens for your users. This process is another section on its own and is listed below.

Generating Software Tokens For Users

In addition to the Google Authenticator Token Verification iRule for APM we also wrote a Google Authenticator Soft Token Generator iRule that will generate soft tokens for your users. The iRule can be added directly to an HTTP virtual server without a pool and accessed directly to create tokens. There are a few available fields in the generator: account, pre-defined secret, and a QR code option. The "account" field defines how to label the soft token within the user's mobile device and can be useful if the user has multiple soft tokens on the same device (I have 3 and need to label them to keep them straight). A 10-byte string can be used as a pre-defined secret for conversion to a base32-encoded key. We advise against using a pre-defined key because a key known to the user is something they know (as opposed to something they have) and could potentially be regenerated out-of-band, thereby nullifying the benefits of two-factor authentication. Lastly, there is an option to generate a QR code by sending an HTTPS request to Google and returning the QR code as an image. While this is convenient, it could be seen as insecure since the key may wind up in Google's logs somewhere. You'll have to decide if that is a risk you're willing to take for the convenience it provides.

Once the token has been generated, it will need to be added to a data group on the BIG-IP:

Navigate to Local Traffic > iRules > Data Group Lists
Select Create from the upper right-hand corner if the data group does not yet exist. If it exists, just select it from the list.
Name the data group "google_auth_keys" (the data group name can be changed in the beginning section of the iRule)
The type of data group will be String
Type the "username" into the String field and paste the "Google Authenticator key" into the Value field
Click Add and the username/key pair should appear in the list as such: user := ONSWG4TFOQYTEMZU
Click Finished when all your username/key pairs have been added.

Your user can scan the QR code or type the key into their device manually. After they scan the QR code, the account name should appear along with the TOTP for the account. The image below is how the soft token appears in the Google Authenticator iPhone application:

Once again, do not let the user leave with a copy of the plain text key. Knowing their key value will negate the value of having the token in the first place. Once the key has been added to the BIG-IP and the user's device, and they've tested their access, destroy any reference to the key outside the BIG-IP's data group. If you're worried about having the keys in plain text on the BIG-IP, they can be encrypted with AES or stored off-box in LDAP and only queried via a secure connection. This is beyond the scope of this article, but doable with iRules.
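If you ever need to sanity-check a token during enrollment or troubleshooting, the TOTP math described earlier can be reproduced off-box. The following is a rough sketch for a Linux/Unix shell, assuming openssl and xxd are available; it uses the sample key above (ONSWG4TFOQYTEMZU, whose base32 decoding is the 10-byte string "secret1234"), and the six-digit result should match what Google Authenticator displays for that key as long as the clocks are in sync:

#!/bin/bash
# Illustrative TOTP check - not part of the iRule solution.
# Shared secret: base32 "ONSWG4TFOQYTEMZU" decoded to hex ("secret1234").
KEY_HEX=73656372657431323334

# 30-second time-step counter, expressed as an 8-byte big-endian value
STEP=$(printf '%016x' $(( $(date +%s) / 30 )))

# HMAC-SHA1 of the counter keyed with the shared secret (40 hex characters)
HMAC=$(printf %s "$STEP" | xxd -r -p | openssl dgst -sha1 -mac HMAC -macopt hexkey:$KEY_HEX | awk '{print $NF}')

# Dynamic truncation: the last hex digit selects a 4-byte window,
# which is masked to 31 bits and reduced modulo 1,000,000
OFFSET=$(( 16#${HMAC:39:1} * 2 ))
CODE=$(( ( 16#${HMAC:$OFFSET:8} & 0x7FFFFFFF ) % 1000000 ))
printf '%06d\n' "$CODE"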
Code

Google Authenticator Token Verification iRule for APM – Documentation and code for the iRule used in this Tech Tip
Google Authenticator Soft Token Generator iRule – iRule for generating soft tokens for users
Sample Access Policy: Two-Factor Authentication With Google Authenticator And APM – APM access policy

Reference Materials

RFC 4226 - HOTP: An HMAC-Based One-Time Password Algorithm
RFC 2104 - HMAC: Keyed-Hashing for Message Authentication
RFC 4648 - The Base16, Base32, and Base64 Data Encodings
SOL3122: Configuring the BIG-IP system to use an NTP server using the Configuration utility – Information on configuring time servers
Configuration Guide for BIG-IP Access Policy Manager – The "big book" on APM configurations
Configuring Authentication Using AAA Servers – Official F5 documentation for configuring AAA servers for APM
Troubleshooting AAA Configurations – Extra help if you hit a snag configuring your AAA server

LTM External Monitors: The Basics
LTM's external monitors are incredibly flexible, fairly easy to implement, and especially useful for monitoring applications for which there is no built-in monitor template. They give you the ability to effectively monitor the health of just about any application by writing custom scripts to interact with your servers in the same way users would. In this article, I will attempt to explain the basic LTM external monitoring paradigm, then dissect and explain one of the sample monitors from the Advanced Design & Config codeshare. (Thanks to poster pgroven for inspiring me to finally write this up.)

An "External Monitor" is a script, external to the configuration file, which contains specific logic designed to interact with your servers to verify the health of load balanced services. LTM runs a unique instance of the custom-crafted script against each pool member to which it is applied, passing command line arguments and environment variables as specified in the monitor definition calling the script. The script logic formulates and submits a request (or requests) to the target pool member, evaluates the response(s), and manages the pool member's availability based on the results of the response evaluation.

The Tools

The sample monitor scripts

The external script itself should be a shell script (if at all possible) to minimize overhead. If absolutely necessary, a perl script may be used instead, but keep in mind that the overhead of invoking the interpreter and required modules for multiple instances may negatively impact performance overall. However, LTM was not intended to be a development platform or a dedicated monitoring device, and thus has a limited set of development tools and modules included in the software build, so you may not find the perl modules you need. You can add them, but it is not recommended or supported to do so, and those customizations will likely not survive an upgrade. (You can also use an external monitor to invoke a compiled program, but that discussion is beyond the scope of this article.)

cURL

cURL is a very flexible command line tool you can use in shell and perl scripts for complex interactions with HTTP and FTP servers.

netcat

netcat is another useful command line tool that facilitates interaction with TCP and UDP services.

The LTM external monitor template

The LTM external monitor template allows you to specify the name of the script to run, the interval & timeout, command line arguments and variables the script requires, and an alternate destination for the monitor traffic.

The Tips ("good to know" stuff and best practices recommendations)

There are a few special considerations you need to make when writing the script and configuring the LTM monitor definition that calls it.

Do you really need an external monitor?

Never use an external monitor when a built-in one will work as well. Forking a shell and running even the simplest shell script takes a significant amount of system resources, so external monitors should be avoided whenever possible. If possible, have the server administrator script execution of the required transaction on the server itself (or locate/author an alternative script on the server) that reliably reflects its availability. Then, instead of an external monitor, you can define a built-in monitor that requests that dynamic script from the server, and let the server run the script locally and report results.
For example, the simple request/response HTTP transaction in the sample script below would be much better implemented using the built-in basic HTTP monitor.

Optimization

Use the lowest overhead tools, make the simplest possible request, and minimize the amount of response parsing required to determine the pool member's status. The script can contain just about any logic you want to determine if that server is healthy. You can use command line tools like netcat and cURL to replicate server transactions, from a basic request and response parsing for an expected string, to more complicated exchanges where cookies or persistence tokens are used, login is required, or some other dynamic transaction must take place in order to establish the usability of a server by its intended users.

Redundant pairs

Both units in a redundant pair will independently run the configured monitor, even when running as Standby. Monitor status is not shared between the units of a redundant pair.

Variables

Variables may be passed to indicate service or hostname, the URI you need to request, or just about any piece of information that would be needed to construct a valid query and receive a valid response from the server. Variables can contain static values, basic regex expressions, or even expressions that contain other variables. As long as your script receives the expected variables from the monitor definition and the logic handles them appropriately, the possibilities are fairly limitless.

Authentication

If your script must pass authentication tokens to the pool members to sufficiently transact with them, make sure the authentication method will allow multiple concurrent logins. Each pool-member-specific instance on each member of a redundant pair may attempt to log in simultaneously. If only a single login is allowed per credential, authentication collisions will most likely result in rolling multiple concurrent false downs, as only one monitor request can succeed at a time.

Script against one pool member

The script should be written to determine the health of one specific pool member. An LTM monitor script is really a template for monitoring a single pool member. Whether you apply an LTM monitor to an individual pool member or to the entire pool in the GUI, a separate copy of the monitor runs for each pool member, passing only that specific IP & port to be tested and maintaining only that single tested pool member's availability. (Discrete monitoring of a single pool member by an external monitor is especially important if other monitors will also be applied to the pool members.)

Minimize the work

Keep the amount of work your monitor script must perform as small as possible. Both the script that runs on LTM and the request against the server itself should represent the minimum interaction required to adequately determine the server's health. If you consider how often the monitor will make that request against each pool member, you can get an idea of the scale of the work that you're asking both BIG-IP and your servers to do.

The Ins & Outs

A script intended for use as an external monitor must conform to some specific input and output requirements.

Command line arguments

The IP and port of the pool member are passed automatically as the first 2 command line arguments for all external monitors. The IP address is always passed in the IPv6 format (TMOS' internal address format). IPv4 addresses are passed using IPv6's special "IPv4 mapped address" transition notation: the IPv4 address prefixed with "::ffff:".
In that notation, the IP address for pool member 10.0.0.1:80 would be "::ffff:10.0.0.1". The proper address type is critical to proper operation of your monitor script. More on that later. Additional command line arguments may be defined in the monitor configuration. When defined, they are passed to the script by the monitoring daemon as the 3rd, 4th, and subsequent arguments.

Variables

Variables in the form of Name/Value pairs may also be defined in the monitor configuration. When defined, they are created as environment variables in the shell forked for each instance of the script.

Script Output

IF ANY VALUE AT ALL GOES TO STANDARD OUTPUT, THE POOL MEMBER WILL BE MARKED UP. If the pool member is determined to be healthy enough to receive load balanced traffic by successfully satisfying the script logic, the script should output any value but null to standard output, and the monitoring daemon will mark the pool member up. If the pool member does not respond as expected, the script will output nothing to stdout, and the lack of output will cause the monitoring daemon to mark the pool member down at the expiration of the timeout. All other outputs from the script are ignored by the monitoring daemon.

The Timing

The interval is the amount of time that will elapse between the start of each monitor attempt. In order to avoid creating a Denial of Service situation by sending your servers excessive monitor traffic, you should increase the interval as much as possible. The interval MUST be longer than the longest possible healthy response should take, since each successive instance of the script run against a pool member will kill off any already-running previous instances, assuming they are hung and will never complete. F5 recommends a timeout value 3 times greater than the interval value plus 1 second, but you can use a different ratio if necessary. Setting the timeout shorter than the interval is not recommended. If you consider that the monitor will make that request every <interval> against each pool member, you can get an idea of the scale of the work that you're asking both LTM and your servers to do, so some careful testing is in order with the goal of minimizing the timeout value and maximizing the interval. (If you notice that your healthy pool members are being marked down and then back up again on the next interval, your timeout may be too short, and some further experimentation may be in order.) There are also ways that you can control and tighten up the tolerance for timing in some monitors. In another article, we will take a closer look at a different external monitor that marks pool members down a little bit more aggressively than waiting for the monitor timeout.

The Gory Details

Here's a sample monitor from the codeshare: HTTPMonitor_cURL_BasicGET

Let's go through the script a section at a time and take a closer look at what's going on. First of all, notice that the script documentation tells us it is expecting 2 variable definitions:

# This example expects the following Name/Value pairs:
#   URI = the URI to request from the server
#   RECV = the expected response (not case sensitive)

For this example, we are going to request the URI "/testpath/testfile.html" over HTTP for each server, and expect a string that says "Server is UP!!!".
(As noted earlier, the simple request/response HTTP transaction demonstrated here would be much better implemented using the built-in basic HTTP monitor with static request/receive strings, but it is still helpful in demonstrating the basic requirements for external monitor implementation.)

Now that we know what variables we need to define, the monitor configuration will look like this:

monitor ExternalHTTP {
    defaults from external
    RECV "Server is UP!!!"
    run ""
    URI "/testpath/testfile.html"
}

Once the monitor is defined, it can be applied to the pool members. (The monitor can be applied to individual pool members or the entire pool. Either way, a unique instance of the script is run for each pool member at each interval to monitor each pool member independently.)

When the monitoring daemon (bigd) runs the script according to the monitor definition, it forks a new shell and creates the required environment variables, then invokes the script with the 2 default command line arguments (the target pool member's IP address and port).

At the start of the script, the command line arguments are processed. First it checks if the IPv6 address passed is in the IPv4 mapped format, and if so, converts it to a standard IPv4 address instead, and assigns both arguments to named environment variables:

# remove IPv6/IPv4 compatibility prefix (LTM passes addresses in IPv6 format)
IP=`echo ${1} | sed 's/::ffff://'`
PORT=${2}

Once the IP and PORT variables are defined, they are used to set up a process management scheme intended to prevent multiple copies of the monitor from running against the same pool member at the same time. It works like this: each instance of the script first looks for a unique file named "monitorname.IP_port.pid" in /var/run containing the process ID of the last instance of the script run against that pool member. If it exists, it means the last instance of the script has not completed. Since multiple copies of the same script running against the same pool member may interfere with proper monitor operations, the script kills that process, then re-writes the PID file containing the process ID of the current instance for reference by the next instance.

PIDFILE="/var/run/`basename ${0}`.${IP}_${PORT}.pid"
# kill off the last instance of this monitor if hung and log current pid
if [ -f $PIDFILE ]
then
   kill -9 `cat $PIDFILE` > /dev/null 2>&1
fi
echo "$$" > $PIDFILE

Now the heavy lifting begins. In this example, we're simply sending a URI and examining the response to see if it contains the RECV string:

# send request & check for expected response
curl -fNs http://${IP}:${PORT}${URI} | grep -i "${RECV}" 2>&1 > /dev/null

(Remember this is a simplified example. In a real world example, the logic inserted here would replicate whatever transactions you identified earlier as the minimum required interaction to determine the pool member's health. cURL has a wide range of options you can use to mimic almost any browser operation, including sending and receiving cookies, to replicate multi-step transactions or validate complex responses.)

If the expected response contained the value of the RECV variable, the "grep" command will return 0, causing the script to send the string "UP" to stdout, and the pool member will be marked up immediately. If the expected response did NOT contain the value of the RECV variable, the "grep" command will return a non-zero value, the script will output nothing to stdout, and the pool member will be marked down when the timeout expires.
# mark node UP if expected response was received
if [ $? -eq 0 ]
then
    echo "UP"
fi

And finally the script will delete the PID file written earlier (since it has finished cleanly and won't need to be killed off by the next instance) and then exit:

rm -f $PIDFILE
exit

It doesn't work... what now?

Troubleshooting external monitors can be challenging. In my next article, I'll cover the basic process you can follow to track down and resolve any issues that may interfere with proper monitor operation. (LTM External Monitors: Troubleshooting)

Troubleshooting TLS Problems With ssldump
Introduction

Transport Layer Security (TLS) is used to secure network communications between two hosts. TLS largely replaced SSL (Secure Sockets Layer) starting in 1999, but many browsers still provide backwards compatibility for SSL version 3. TLS is the basis for securing all HTTPS communications on the Internet. BIG-IP provides the benefit of being able to offload the encryption and decryption of TLS traffic onto a purpose specific ASIC. This provides performance benefits for the application servers, but also provides an extra layer for troubleshooting when problems arise. It can be a daunting task to tackle a TLS issue with tcpdump alone. Luckily, there is a utility called ssldump. Ssldump looks for TLS packets and decodes the transactions, then outputs them to the console or to a file. It will display all the components of the handshake, and if a private key is provided it will also decrypt and display the application data. The ability to fully examine communications from the application layer down to the network layer in one place makes troubleshooting much easier.

Note: The user interface of the BIG-IP refers to everything as SSL with little mention of TLS. The actual protocol being negotiated in these examples is TLS version 1.0, which appears as "Version 3.1" in the handshakes. For more information on the major and minor versions of TLS, see the TLS record protocol section of the Wikipedia article.

Overview of ssldump

I will spare you the man page, but here are a few of the options we will be using to examine traffic in our examples:

ssldump -A -d -k <key file> -n -i <capture VLAN> <traffic expression>

-A   Print all fields
-d   Show application data when private key is provided via -k
-k   Private key file, found in /config/ssl/ssl.key/; the key file can be located under the client SSL profile
-n   Do not try to resolve PTR records for IP addresses
-i   The capture VLAN name is the ingress VLAN for the TLS traffic

The traffic expression is nearly identical to the tcpdump expression syntax. In these examples we will be looking for HTTPS traffic between two hosts (the client and the LTM virtual server). In this case, the expression will be "host <client IP> and host <virtual server IP> and port 443". More information on expression syntax can be found in the ssldump and tcpdump manual pages.

*the manual page can be found by typing 'man ssldump' or online here <http://www.rtfm.com/ssldump/Ssldump.html>

A healthy TLS session

When we look at a healthy TLS session we can see what things should look like in an ideal situation. First the client establishes a TCP connection to the virtual server. Next, the client initiates the handshake with a ClientHello. Within the ClientHello are a number of parameters: version, available cipher suites, a random number, and compression methods if available. The server then responds with a ServerHello in which it selects the strongest cipher suite, the version, and possibly a compression method. After these parameters have been negotiated, the server will send its certificate, completing the ServerHello. Finally, the client will respond with the PreMasterSecret in the ClientKeyExchange, and each side will send a 1-byte ChangeCipherSpec agreeing on their symmetric key algorithm to finalize the handshake. The client and server can now exchange secure data via their TLS session until the connection is closed. If all goes well, this is what a "clean" TLS session should look like:
If all goes well, this is what a “clean” TLS session should look like: New TCP connection #1: 10.0.0.10(57677) <-> 10.0.0.20(443) 1 1 0.0011 (0.0011) C>S Handshake ClientHello Version 3.1 cipher suites TLS_DHE_RSA_WITH_AES_256_CBC_SHA [more cipher suites] TLS_RSA_EXPORT_WITH_RC4_40_MD5 Unknown value 0xff compression methods unknown value NULL 1 2 0.0012 (0.0001) S>C Handshake ServerHello Version 3.1 session_id[0]= cipherSuite TLS_RSA_WITH_AES_256_CBC_SHA compressionMethod NULL 1 3 0.0012 (0.0000) S>C Handshake Certificate 1 4 0.0012 (0.0000) S>C Handshake ServerHelloDone 1 5 0.0022 (0.0010) C>S Handshake ClientKeyExchange 1 6 0.0022 (0.0000) C>S ChangeCipherSpec 1 7 0.0022 (0.0000) C>S Handshake Finished 1 8 0.0039 (0.0016) S>C ChangeCipherSpec 1 9 0.0039 (0.0000) S>C Handshake Finished 1 10 0.0050 (0.0010) C>S application_data 1 0.0093 (0.0000) S>C TCP FIN 1 0.0093 (0.0000) C>S TCP FIN Scenario 1: Virtual server missing a client SSL profile The client SSL profile defines what certificate and private key to use, a key passphrase if needed, allowed ciphers, and a number of other options related to TLS communications. Without a client SSL profile, a virtual server has no knowledge of any of the parameters necessary to create a TLS session. After you've configured a few hundred HTTPS virtuals this configuration step becomes automatic, but most of us mortals have missed step at one point or another and left ourselves scratching our heads. We'll set up a test virtual that has all the necessary configuration options for an HTTPS profile, except for the omission of the client SSL profile. The client will open a connection to the virtual on port 443, a TCP connection will be established, and the client will send a 'ClientHello'. Normally the server would then respond with ServerHello, but in this case there is no response and after some period of time (5 minutes is the default timeout for the browser) the connection is closed. This is what the ssldump would look like for a missing client SSL profile: New TCP connection #1: 10.0.0.10(46226) <-> 10.0.0.20(443) 1 1 0.0011 (0.0011) C>SV3.1(84) Handshake ClientHello Version 3.1 random[32]= 4c b6 3b 84 24 d7 93 7f 4b 09 fa f1 40 4f 04 6e af f7 92 e1 3b a7 3a c2 70 1d 34 dc 9d e5 1b c8 cipher suites TLS_DHE_RSA_WITH_AES_256_CBC_SHA [a number of other cipher suites] TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5 TLS_RSA_EXPORT_WITH_RC4_40_MD5 Unknown value 0xff compression methods unknown value NULL 1 299.9883 (299.9871) C>S TCP FIN 1 299.9883 (0.0000) S>C TCP FIN Scenario 2: Client and server do not share a common cipher suite This is a common scenario when really old browsers try to connect to servers with modern cipher suites. We have purposely configured our SSL profile to only accept one cipher suite (TLS_RSA_WITH_AES_256_CBC_SHA in this case). When we try connect to the virtual using a 128-bit key, the connection is immediately closed with no ServerHello from the virtual server. The differentiator here, while small, is the quick closure of the connection and the ‘TCP FIN’ that arises from the server. This is unlike the behavior of the missing SSL profile, because the server initiates the connection teardown and there is no connection timeout. 
The differences, while subtle, hint at the details of the problem:

New TCP connection #1: 10.0.0.10(49342) <-> 10.0.0.20(443)
1 1  0.0010 (0.0010)  C>S  V3.1(48)  Handshake
      ClientHello
        Version 3.1
        random[32]=
          4c b7 41 87 e3 74 88 ac 89 e7 39 2d 8c 27 0d c0
          6e 27 da ea 9f 57 7c ef 24 ed 21 df a6 26 20 83
        cipher suites
          TLS_RSA_WITH_AES_128_CBC_SHA
          Unknown value 0xff
        compression methods
          unknown value
          NULL
1    0.0011 (0.0000)  S>C  TCP FIN
1    0.0022 (0.0011)  C>S  TCP FIN

Conclusion

Troubleshooting TLS can be daunting at first, but an understanding of the TLS handshake makes it much more approachable. We cannot exhibit every potential problem in this tech tip. However, we hope that walking through some of the more common examples will give you the tools necessary to troubleshoot other issues as they arise. Happy troubleshooting!

Two-Factor Authentication With Google Authenticator And LDAP
Introduction

Earlier this year Google released their time-based one-time password (TOTP) solution, named Google Authenticator. A TOTP is a single-use code with a finite lifetime that can be calculated by two parties (client and server) using a shared secret and a synchronized clock (see RFC 4226 for additional information). In the case of Google Authenticator, the TOTPs are generated using a software (soft) token on a mobile device. Google currently offers applications for the Apple iPhone, Android-based devices, and Blackberry handsets. A user authenticating with a Google Authenticator-enabled service requires possession of this software token. For the token to be effective, it must not be possible to duplicate it, and the shared secret should be closely guarded.

Google Authenticator's soft token solution offers a number of advantages over other commercially available solutions. It is free to use (all applications are free to download), the TOTP algorithm is open source, well-known, and well-tested, and it does not require a dedicated server for processing tokens. While certain potential weaknesses in SHA-1 have been identified, none of them can be exploited within the 30-second timeframe of the TOTP's usability. For all intents and purposes, SHA-1 is reasonably secure, well-tested, and purpose-appropriate for this application. The algorithm, however, is only as secure as the users and administrators are at protecting the shared secret used in token processing.

Calculating The Google Authenticator TOTP

The Google Authenticator TOTP is calculated by generating an HMAC-SHA1 token, which uses a 10-byte base32-encoded shared secret as a key and Unix time (epoch) divided into 30-second intervals as inputs. The resulting 20-byte token is converted to a 40-character hexadecimal string, and the least significant (last) hex digit is used to calculate a 0-15 offset. The offset is then used to read the next 8 hex digits starting at that offset. The resulting 8 hex digits are AND'd with 0x7FFFFFFF (2,147,483,647), and the modulo of the resultant integer and 1,000,000 is calculated, which produces the correct code for that 30-second period. (A small stand-alone sketch of this arithmetic appears at the end of this section.)

Base32 encoding and decoding were covered in my previous Tech Tip titled Base32 Encoding And Decoding With iRules. That Tech Tip details the process for decoding a user's base32-encoded key to binary as well as converting a binary key to base32. The HMAC-SHA256 token calculation iRule was originally submitted by Nat to the Codeshare on DevCentral. The iRule was slightly modified to support the SHA-1 algorithm, but is otherwise taken directly from the pseudocode outlined in RFC 2104. These two pieces of code contribute the bulk of the processing of the Google Authenticator code. The rest is done with simple bitwise and arithmetic functions.

Google Authenticator Two-Factor Authentication Process

Installing Google Authenticator Two-Factor Authentication

The installation of Google Authenticator two-factor authentication on your BIG-IP is divided into six sections: creating an LDAP authentication configuration, configuring an LDAP (Active Directory) authentication profile, testing your authentication profile, adding the Google Authenticator iRule and "user_to_google_auth" mapping data group, attaching the iRule to the authentication profile, and finally generating soft tokens for your users. The process is broken out into steps, as trying to complete all the sections in tandem can be difficult to troubleshoot.
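To make the TOTP arithmetic described above concrete, here is a minimal stand-alone Tcl sketch of the truncation steps. This is not the Codeshare iRule itself; it assumes tcllib's sha1 package and Tcl 8.5 or later (for the 64-bit "W" binary format), neither of which applies inside an iRule, and it assumes $secret already holds the raw 10-byte binary shared secret (base32 decoding is covered in the Base32 Tech Tip referenced above).

package require sha1

proc totp_now {secret} {
    # Unix time divided into 30-second intervals, packed as an 8-byte big-endian counter
    set counter [binary format W [expr {[clock seconds] / 30}]]
    # 20-byte HMAC-SHA1 digest, rendered as a 40-character hex string
    set digest [sha1::hmac -hex -key $secret $counter]
    # Dynamic truncation: the last hex digit is a 0-15 byte offset into the digest
    scan [string index $digest end] %x offset
    # Read 8 hex digits (4 bytes) starting at that byte offset
    scan [string range $digest [expr {$offset * 2}] [expr {$offset * 2 + 7}]] %x dbc
    # Mask the sign bit and reduce modulo 1,000,000 for a 6-digit code
    return [format %06d [expr {($dbc & 0x7FFFFFFF) % 1000000}]]
}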
Creating An LDAP (Active Directory) Authentication Configuration

The LDAP configuration we will build is extremely basic: no SSL, no Active Directory-specific options, etc. A detailed walkthrough for more advanced deployments can be found in our best practices guide: Configuring LDAP remote authentication for Active Directory.

1. Log in to your BIG-IP using administrator credentials
2. Navigate to Local Traffic > Profiles > Authentication > Configurations
3. Click "Create" in the upper right-hand corner
4. Select "LDAP" from the "Type" drop-down menu
5. Now fill in the fields with your environment-specific values:
   Name: ldap.f5test.local
   Type: LDAP
   Remote LDAP Tree: dc=f5test, dc=local
   Host(s): <IP address(es) of LDAP server(s)>
   Service Port: 389 (default)
   LDAP Version: 3 (default)
   Bind DN: cn=ldap_bind_acct, dc=f5test, dc=local (if your LDAP server allows anonymous binds you may not need this option)
   Bind Password: <admin password>
   Confirm Bind Password: <admin password>
6. Click "Finished" to save the configuration

Configuring An LDAP (Active Directory) Authentication Profile

1. Navigate to Local Traffic > Profiles > Authentication > Profiles
2. Click "Create" in the upper right-hand corner
3. Select "LDAP" from the "Type" drop-down menu
4. Fill in the fields with appropriate values:
   Name: ldap.f5test.local
   Type: LDAP
   Configuration: ldap.f5test.local (select the previously named configuration from the drop-down)
   Rule: (leave this unchecked and not enabled for now, but this is where we will enable the Google Authenticator iRule shortly)
5. Click "Finished"

Test Your Authentication Profile

1. Create a basic HTTP virtual server with your LDAP authentication profile enabled on the virtual
2. Access your virtual from a web browser and you should be prompted with an HTTP Basic Authentication credential form
3. Test with known-working credentials. If everything works, you're good to go; if not, you'll need to troubleshoot the authentication issue

Adding the Google Authenticator iRule

1. Go to the DevCentral Codeshare and download the Google Authenticator iRule
2. Navigate to Local Traffic > iRules > iRule List
3. Click "Create" in the upper right-hand corner
4. Name your iRule "google_authenticator_plus_ldap_two_factor" and paste the iRule into the "Definition" section
5. Click "Finished" when you're done

Attaching The Google Authenticator iRule To Your Authentication Profile

1. Go back to the "Authentication Profile" section by browsing to Local Traffic > Profiles > Authentication > Profiles
2. Select your LDAP profile from the list
3. Now select the "google_authenticator_plus_ldap_two_factor" iRule from the "Rule" drop-down to attach it
4. Click "Finished"

Generating Software Tokens For Users

In addition to the Google Authenticator iRule, we also wrote a Google Authenticator Soft Token Generator iRule that will generate soft tokens for your users. The iRule can be added directly to an HTTP virtual server without a pool and accessed directly to create tokens. There are a few available fields in the generator: account, pre-defined secret, and a QR code option. The "account" field defines how to label the soft token within the user's mobile device and can be useful if the user has multiple soft tokens on the same device (I have 3 and need to label them to keep them straight). A 10-byte string can be used as a pre-defined secret for conversion to a base32-encoded key.
We advise against using a pre-defined key, because a key known to the user is something they know (as opposed to something they have) and could potentially be regenerated out-of-band, thereby nullifying the benefits of two-factor authentication. Lastly, there is an option to generate a QR code by sending an HTTPS request to Google and returning the QR code as an image. While this is convenient, it could be seen as insecure since the key may wind up in Google's logs somewhere. You'll have to decide if that is a risk you're willing to take for the convenience it provides.

Once the token has been generated, it will need to be added to a data group on the BIG-IP:

1. Navigate to Local Traffic > iRules > Data Group Lists
2. Select "Create" from the upper right-hand corner if the data group does not yet exist. If it exists, just select it from the list.
3. Name the data group "user_to_google_auth" (the data group name can be changed in the RULE_INIT section of the Google Authenticator iRule)
4. The type of data group will be "string"
5. Type the "username" into the "string" field and paste the "Google Authenticator key" into the "value" field
6. Click "Add" and the username/key pair should appear in the list, such as: user := ONSWG4TFOQYTEMZU
7. Click "Finished" when all your username/key pairs have been added.

Your user can scan the QR code or type the key into their device manually. After they scan the QR code, the account name should appear along with the TOTP for the account. The image below is how the soft token appears in the Google Authenticator iPhone application:

Once again, do not let the user leave with a copy of the plain text key. Knowing their key value would negate the value of having the token in the first place. Once the key has been added to the BIG-IP and the user's device, and they've tested their access, destroy any reference to the key outside the BIG-IP's data group. If you're worried about having the keys in plain text on the BIG-IP, they can be encrypted with AES or stored off-box in LDAP and queried only via a secure connection. This is beyond the scope of this article, but doable with iRules.

Testing and Troubleshooting

There are a lot of moving pieces in this iRule, so troubleshooting can be a bit daunting at first glance, but because all of the pieces can be separated into their constituent parts, the problem is usually identified quickly. There are five pieces that make up this solution: the LDAP service, the BIG-IP LDAP profile, the Google Authenticator iRule, the "user_to_google_auth" mapping data group, and finally the soft token. Try to separate them from each other to expedite the troubleshooting process. Here are a few helpful hints in troubleshooting potential issues:

1. Are all the clocks synchronized? The BIG-IP and LDAP server can be tested from the command line by running 'ntpdate -q pool.ntp.org'. If the clocks are more than a few milliseconds off, they'll need to be adjusted. An NTP server should be configured for all devices. Likewise, the user's mobile device must be configured to use network time, or else the calculated value will always be wrong. Remember that timezones do not matter when using Unix time.

2. Is basic LDAP working without the iRule attached? Before ever touching any of the Google Authenticator related iRules, data groups, devices, etc., your LDAP configuration should be in working order.
If you're having problems finding the issue, enable "debug logging" at the bottom of the LDAP authentication configuration page on your BIG-IP and tail the logs on your LDAP server. Revisit the best practices guide if you are still unsure about any configuration options.

3. Turn on (or increase) logging for the Google Authenticator iRule. In the RULE_INIT section of the Google Authenticator iRule, there is a debug logging option. Set it to '2' and all actions from the iRule will be logged to /var/log/ltm. If you see one particular area that is consistently hanging, investigate it further.

Conclusion

With every passing day, system security becomes a greater concern. Today's attacks are far more sophisticated and costly than those of days past. With all the stories of stolen laptops and other devices in the field, it is a little easier to sleep as a systems administrator knowing that a tech-aware thief has one more hurdle to surpass in an effort to compromise your infrastructure. The implementation costs of deploying two-factor authentication with Google Authenticator in an existing F5 infrastructure are very low, assuming your employees have company-issued mobile devices. The cost can be reduced to the man-hours required to install this iRule and generate tokens for your users, which is almost certainly less than the cost of a single compromised account. Until next time, batten down the hatches and get that two-factor project underway that's been on the backburner for two years.

Code and References

Google Authenticator iRule – Documentation and code for the iRule used in this Tech Tip
Google Authenticator Soft Token Generator iRule – iRule for generating soft tokens for users
RFC 4226 - HOTP: An HMAC-Based One-Time Password Algorithm
RFC 2104 - HMAC: Keyed-Hashing for Message Authentication
RFC 4648 - The Base16, Base32, and Base64 Data Encodings
SOL11072 - Configuring LDAP remote authentication for Active Directory

Monitoring TCP Applications #01
LTM has built-in application health monitor templates for many TCP-based application protocols (FTP, HTTP, HTTPS, IMAP, LDAP, MSSQL, NNTP, POP3, RADIUS, RTSP, RPC, SASP, SIP, SMB, SMTP, SOAP). If you need to monitor an application which depends on an upper layer protocol for which there is not a built-in monitor template, LTM provides a number of options to build a monitor based on the underlying transport layer protocol: TCP. I'll cover each of those options in a separate article, starting here with the built-in "tcp" and "tcp_half_open" monitor types.

Overview: tcp and tcp_half_open

Both monitor types attempt to verify the availability of a service by making a TCP connection on the appropriate port. There are only a couple of differences between the tcp and the tcp_half_open monitors:

monitor type   ECV/EAV   reverse/transparent   connection handling      transact with service?
tcp            ECV       yes (optional)        full open, full close    yes (optional)
tcp_half_open  EAV       no                    half open, RST close     no

Both have the same standard monitor configuration options of interval, timeout, and alias address/port (for more on those options, and on the reverse and transparent options, see the LTM manual section on Configuring Monitors). As you will see below, some of the differences are significant and may dictate which monitor is most appropriate for your application.

Monitor Type "tcp"

The tcp monitor is useful for a couple of different scenarios: monitoring services that you can't transact with, but for which you want to verify the availability of the socket and close the connection properly (routers, firewalls); or monitoring services with which you can transact a quick request/response in cleartext after the TCP handshake to verify service availability (telnet is a basic example, but the same concept applies to any other text-based protocol).

How it works

In summary, a monitor of type tcp attempts to send and/or receive specific content over a TCP connection. The check is successful when the server response contains the Receive String value. A tcp monitor may optionally be configured with a Send String value and a Receive String value. If the Send String value is blank and a connection is successfully established, the service is considered up. A blank Receive String value matches any response.

The default tcp monitor, with no Send string or Receive string configured, tests a service by establishing a TCP connection with the pool member on the configured service port and then immediately closing the connection without sending any data. This causes some services such as telnet and ssh to log a connection error, filling up the server logs with unnecessary errors. To eliminate the extraneous logging, you can configure the tcp monitor to send enough data to the service to make it happy, or just use the tcp_half_open monitor. Depending on your monitoring requirements, you may also be able to monitor a service that expects empty connections, such as tcp_echo (by using the default tcp_echo monitor) or daytime (by specifying the appropriate alias service port when customizing the tcp monitor template).

Here are the details of a tcp monitor in action, including the option for sending data and evaluating the response:

1. The tcp monitor will perform a normal 3-way TCP handshake.
2. If no Send string is configured, the pool member will be marked UP upon successful completion of the 3-way handshake. If a Send string is configured, it will be sent to the server.
3. If the server fails to respond before the timeout, the pool member is marked DOWN.
If the server does respond before the timeout, the server response is compared with the Receive string: if no Receive string is configured, the pool member is marked UP; if a Receive string is configured and the response contains the Receive string, the pool member is marked UP. If the response does not contain the Receive string, the pool member is marked DOWN.
4. If the server resets the connection during the handshake or before an expected response is received, the pool member is marked DOWN and the connection is torn down immediately. In all other cases, the connection is closed with a normal 4-way close.

The decision logic looks like this:

handshake successful?
  no  -> DOWN
  yes -> Send string configured and sent?
           no  -> UP
           yes -> server response received before timeout?
                    no  -> DOWN (close)
                    yes -> Receive string configured?
                             no  -> UP (close)
                             yes -> response contains Receive string?
                                      no  -> DOWN (close)
                                      yes -> UP (close)

Monitor Type "tcp_half_open"

The tcp_half_open monitor is most widely used for gateway monitoring, when you just need to ensure the socket is responding to connection requests and desire the lowest overhead on the monitoring target. For example, a busy router would be less impacted by a half-open connection request that is immediately reset than by a connection that completes the entire open and close handshake sequence. (Although this approach minimizes the impact of monitoring on the monitoring target, it's important to know that the tcp_half_open monitor uses more of LTM's memory than the tcp monitor does, since the tcp_half_open monitor is an EAV that runs a small script outside of TMM, while the tcp monitor is an ECV internal to TMM.)

Another common use for the tcp_half_open monitor is to prevent the application from spewing a bunch of log messages indicating connections were opened but not used. For example, one consultant recently told me he uses the tcp_half_open monitor to verify sshd is alive and answering without filling up /var/log/secure. Telnet has similar issues with connections on which no data is sent. It should be noted that some applications cannot gracefully handle the half-open connection and subsequent reset, so some testing may be in order before implementing this monitor.

How it works

The tcp_half_open monitor sends a SYN packet to the pool member, and if a SYN-ACK is received from the server in response, the pool member is marked UP:

SYN sent
  SYN/ACK received?
    no  -> DOWN**
    yes -> UP (RST sent to close the half-open connection)

**Not fully functional in some versions: SOL7362: The BIG-IP tcp_half_open monitor does not mark the service as DOWN after receiving a RST packet from the pool member

More info: LTM manual, Configuring Monitors
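As a closing illustration, on recent versions that use tmsh, a custom monitor based on the built-in tcp template might be created along these lines. The monitor and pool names are placeholders, the SMTP-style send/recv pair is only an example of a cleartext transaction (the monitor sends QUIT and looks for the service's 220 greeting in the response), and the quoting of the CR/LF may need adjustment for your shell and software version.

# Create a tcp monitor that performs a small cleartext transaction, then attach it to a pool
tmsh create ltm monitor tcp smtp_banner_check send "QUIT\r\n" recv "220" interval 5 timeout 16
tmsh modify ltm pool my_smtp_pool monitor smtp_banner_check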
One Time Passwords via an SMS Gateway with BIG-IP Access Policy Manager

One time passwords, or OTP, are used (as the name indicates) for a single session or transaction. The plus side is a more secure deployment; the downside is two-fold: first, most solutions involve a token system, which is costly in management, dollars, and complexity, and second, people are lousy at remembering things, so a delivery system for that OTP is necessary. The exercise in this tech tip is to employ BIG-IP APM to generate the OTP and pass it to the user via an SMS Gateway, eliminating the need for a token-creating server or security appliance while reducing cost and complexity.

Getting Started

This guide was developed by F5er Per Boe utilizing the newly released BIG-IP version 10.2.1. The "-secure" option for the mcget command is new in this version and is required in one of the steps for this solution. Also, this solution uses the Clickatell SMS Gateway to deliver the OTPs. Their API is documented at http://www.clickatell.com/downloads/http/Clickatell_HTTP.pdf. Other gateway providers with a web-based API could easily be substituted. Also, there are steps at the tail end of this guide to utilize the BIG-IP's built-in mail capabilities to email the OTP during testing in lieu of SMS.

The process for delivering the OTP is shown in Figure 1. First, a request is made to the BIG-IP APM. The policy is configured to authenticate the user against Active Directory and, if successful, look up the user's mobile number, generate an OTP, and pass it along to the SMS gateway via the HTTP API. The user then enters the OTP into the form presented by APM before being allowed through to the server resources.

BIG-IP APM Configuration

Before configuring the policy, an access profile needs to be created, as do a couple of authentication servers. First, let's look at the authentication servers.

Authentication Servers

To create servers used by BIG-IP APM, navigate to Access Policy->AAA Servers and then click create. This profile is simple: supply your domain server, domain name, and admin username and password as shown in Figure 2. The other authentication server is for the SMS Gateway, and since it is an HTTP API we're using, we need the HTTP type server as shown in Figure 3. Note that the hidden form values highlighted in red will come from your Clickatell account information. Also note that the form method is GET, the form action references the Clickatell API interface, and that the match type is set to look for a specific string. The Clickatell SMS Gateway expects the following format:

https://api.clickatell.com/http/sendmsg?api_id=xxxx&user=xxxx&password=xxxx&to=xxxx&text=xxxx

Finally, the successful logon detection value highlighted in red at the bottom of Figure 3 should be modified to match the response code returned from the SMS Gateway. Now that the authentication servers are configured, let's take a look at the access profile and create the policy.

Access Profile & Policy

Before we can create the policy, we need an access profile, shown below in Figure 4 with all default settings. Once that is done, we click on Edit under the Access Policy column highlighted in red in Figure 5. The default policy is bare bones, or as some call it, empty. We'll work our way through the objects, taking screen captures as we go and making notes as necessary. To add an object, just click the "+" sign after the Start flag. The first object we'll add is a Logon Page, as shown in Figure 6. No modifications are necessary here, so you can just click save. Next, we'll configure the Active Directory authentication, so we'll add an AD Auth object.
The only setting here, shown in Figure 7, is selecting the server we created earlier. Following the AD Auth object, we need to add an AD Query object on the AD Auth successful branch, as shown in Figures 8 and 9. The server is selected in the properties tab, and then we create an expression in the branch rules tab. To create the expression, click change, and then select the Advanced tab. The expression used in this AD Query branch rule:

expr { [mcget {session.ad.last.attr.mobile}] != "" }

Next we add an iRule Event object to the AD Query OK branch that will generate the one time password and provide logging. Figure 10 shows the iRule Event object configuration. The iRule referenced by this event is below. The logging is there for troubleshooting purposes, and should probably be disabled in production.

when ACCESS_POLICY_AGENT_EVENT {
   expr srand([clock clicks])
   set otp [string range [format "%08d" [expr int(rand() * 1e9)]] 1 6 ]
   set mail [ACCESS::session data get "session.ad.last.attr.mail"]
   set mobile [ACCESS::session data get "session.ad.last.attr.mobile"]
   set logstring mail,$mail,otp,$otp,mobile,$mobile
   ACCESS::session data set session.user.otp.pw $otp
   ACCESS::session data set session.user.otp.mobile $mobile
   ACCESS::session data set session.user.otp.username [ACCESS::session data get "session.logon.last.username"]
   log local0.alert "Event [ACCESS::policy agent_id] Log $logstring"
}

when ACCESS_POLICY_COMPLETED {
   log local0.alert "Result: [ACCESS::policy result]"
}

On the fallback path of the iRule Event object, add a Variable Assign object as shown in Figure 10b. Note that the first assignment should be set to secure, as indicated in the image with the [S]. The expressions in Figure 10b are:

session.logon.last.password = expr { [mcget {session.user.otp.pw}]}
session.logon.last.username = expr { [mcget {session.user.otp.mobile}]}

On the fallback path of the AD Query object, add a Message Box object as shown in Figure 11 to alert the user if no mobile number is configured in Active Directory.

On the fallback path of the Event OTP object, we need to add the HTTP Auth object. This is where the SMS Gateway we configured in the authentication server is referenced. It is shown in Figure 12. On the fallback path of the HTTP Auth object, we need to add a Message Box as shown in Figure 13 to communicate the error to the client.

On the Successful branch of the HTTP Auth object, we need to add a Variable Assign object to store the username. A simple expression and a unique name for this variable object are all that is changed. This is shown in Figure 14.

On the fallback branch of the Username Variable Assign object, we'll configure the OTP Logon page, which requires a Logon Page object (shown in Figure 15). I haven't mentioned it yet, but the name field of these objects isn't a required change; adding information specific to the object just helps with readability. On this form, only one entry field is required, the one time password, so the second password field (enabled by default) is set to none and the initial username field is changed to password. The Input field below is changed to reflect the type of logon to better cue the user.

Finally, we'll finish off with an Empty Action object where we'll insert an expression to verify the OTP. The name is configured in properties and the expression in the branch rules, as shown in Figures 16 and 17. Again, you'll want to click advanced on the branch rules to enter the expression.
The expression used in the branch rules above is:

expr { [mcget {session.user.otp.pw}] == [mcget -secure {session.logon.last.otp}] }

Note again that the -secure option is only available in version 10.2.1 forward. Now that we're done adding objects to the policy, one final step is to click on the Deny ending following the OK branch of the OTP Verify Empty Action object and change it from Deny to Allow. Figure 18 shows how it should look in the visual policy editor window. Now that the policy is completed, we can attach the access profile to the virtual server and test it out, as can be seen in Figures 19 and 20 below.

Email Option

If during testing you'd rather send emails than utilize the SMS Gateway, then configure your BIG-IP for mail support (Solution 3664), keep the Logging object, lose the HTTP Auth object, and configure the system with this script to listen for the messages sent to /var/log/ltm from the configured Logging object:

#!/bin/bash
while true
do
  tail -n0 -f /var/log/ltm | while read line
  do
    var2=`echo $line | grep otp | awk -F'[,]' '{ print $2 }'`
    var3=`echo $line | grep otp | awk -F'[,]' '{ print $3 }'`
    var4=`echo $line | grep otp | awk -F'[,]' '{ print $4 }'`
    if [ "$var3" = "otp" -a -n "$var4" ]; then
      echo Sending pin $var4 to $var2
      echo One Time Password is $var4 | mail -s $var4 $var2
    fi
  done
done

The log messages look like this:

Jan 26 13:37:24 local/bigip1 notice apd[4118]: 01490113:5: b94f603a: session.user.otp.log is mail,user1@home.local,otp,609819,mobile,12345678

The output from the script as configured looks like this:

[root@bigip1:Active] config # ./otp_mail.sh
Sending pin 239272 to user1@home.local

Conclusion

The BIG-IP APM is an incredibly powerful tool to add to the LTM toolbox. Whether using the mail system or an SMS gateway, you can take a bite out of your infrastructure complexity by using this solution to eliminate the need for a token management service. Many thanks again to F5er Per Boe for this excellent solution!

v11: iRules Data Group Updates
Several months ago I wrote up the v10 formatting for internal and external datagroups: iRules Data Group Formatting Rules. In v11, however, there is a change to the format of the internal data group and the data group reference to external class files (the formatting in the external class file itself is unchanged). The formatting rules in v11 for data groups more closely resemble the tmsh commands necessary to build the class at the CLI (these command attributes are masked if you are using the GUI). I'll follow the same format as the original write-up in showing the various data group types.

The format is the same among internal data group types. If there is no value associated with the key, there is a curly bracket pair trailing the key on the same line. If there is an associated value with a key, the curly bracket opens the value, followed by a newline with the keyword data and the value, then another newline with the closing curly bracket. After the records are listed, the type is specified. For external data groups, the file name and the type are specified. If the filename is in /var/class, the path is omitted from the filename reference.

Address Data Groups

Internal Data Group

ltm data-group internal addr_testclass {
    records {
        192.168.1.1/32 { }
        192.168.1.2/32 {
            data "host 2"
        }
        192.168.2.0/24 { }
        192.168.3.0/24 {
            data "network 2"
        }
    }
    type ip
}

External Data Group

ltm data-group external addr_testclass_ext {
    external-file-name addr_testclass.class
    type ip
}

Integer Data Groups

Internal Data Group

ltm data-group internal int_testclass {
    records {
        1 {
            data "test 1"
        }
        2 {
            data "test 2"
        }
    }
    type integer
}

External Data Group

ltm data-group external int_testclass_ext {
    external-file-name int_testclass
    type integer
}

String Data Groups

Internal Data Group

ltm data-group internal str_testclass {
    records {
        str1 {
            data "value 1"
        }
        str2 {
            data "value 2"
        }
    }
    type string
}

External Data Group

ltm data-group external str_testclass_ext {
    external-file-name str_testclass.class
    type string
}

External Datagroup File Management

Beginning in v11, external datagroups are imported into a local filestore rather than simply existing someplace on the file system (/config/filestore). However, this filestore is not meant to be edited manually. Please follow the steps below for creating or modifying external datagroups. Thanks to hoolio for the steps in this external datagroup section.

Create a New External Datagroup from the CLI Non-Interactively

1. Create or copy over to LTM a temporary file containing the external data group contents. If copying, make sure the line terminators are \n only, not \r\n.

# cat /var/tmp/string_name_value_external_dg.txt
"name1" := "value1",
"name2" := "value2",
"name3" := "value3",

2. Create the new external data group file

tmsh create /sys file data-group string_name_value_external_dg_file separator ":=" source-path file:/var/tmp/string_name_value_external_dg.txt type string

3. Create the external data group referencing the file

tmsh create /ltm data-group external string_name_value_external_dg external-file-name string_name_value_external_dg_file

Modify the External Datagroup File for an Existing Datagroup

1. Create a new temporary file containing the updated external data group contents

# cat /var/tmp/string_name_value_external_v2_dg.txt
"name1" := "valueA",
"name2" := "valueB",
"name3" := "valueC",
2. Import the new data group file

tmsh create /sys file data-group string_name_value_external_v2_dg_file separator ":=" source-path file:/var/tmp/string_name_value_external_v2_dg.txt type string

3. Modify the data group definition to reference the new external data group file

tmsh modify /ltm data-group external string_name_value_external_dg external-file-name string_name_value_external_v2_dg_file

4. Delete the old data group file if it's unneeded

tmsh delete sys file data-group string_name_value_external_dg_file

Handling Line Terminator Discrepancies

Whether importing external datagroups in the GUI or from the CLI, the system does not accept files with \r\n line terminators; it only accepts \n. If you copy files over from Windows, most likely you have the wrong terminator in your file format. To check, you can use the od command.

Datagroup contents created in vi on LTM:

[root@golgotha:Active] data_group_d # od -c /var/tmp/string_name_value_external_dg.txt
0000000 " n a m e 1 " : = " v a l u
0000020 e 1 " , \n " n a m e 2 " : =
0000040 " v a l u e 2 " , \n " n a m e 3
0000060 " : = " v a l u e 3 " , \n
0000077

Datagroup contents created in Notepad on Windows:

[root@golgotha:Active] data_group_d # od -c /var/tmp/notepad_dg.txt
0000000 " n a m e 1 " : = " v a l u
0000020 e 1 " , \r \n " n a m e 2 " : =
0000040 " v a l u e 2 " , \r \n " n a m
0000060 e 3 " : = " v a l u e 3 " ,
0000100

If your line terminators are incorrect, you can use the tr command to remove the \r's:

[root@golgotha:Active] tmp # cat /var/tmp/notepad_dg.txt | tr -d '\r' > /var/tmp/notepad_dg_update.txt
[root@golgotha:Active] tmp # od -c /var/tmp/notepad_dg_update.txt
0000000 " n a m e 1 " : = " v a l u
0000020 e 1 " , \n " n a m e 2 " : =
0000040 " v a l u e 2 " , \n " n a m e 3
0000060 " : = " v a l u e 3 " ,
0000076
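As a usage note, once defined, any of these data groups can be queried from an iRule in the usual way. The sketch below is illustrative only and not part of the original formatting write-up; it consults the address data group defined above, and the log message is purely for demonstration.

when CLIENT_ACCEPTED {
    # True if the client address matches a host entry or falls within a network entry
    if { [class match [IP::client_addr] equals addr_testclass] } {
        # Retrieve the value (e.g. "network 2") associated with the matching entry, if any
        set net_label [class lookup [IP::client_addr] addr_testclass]
        log local0. "Client [IP::client_addr] matched addr_testclass: $net_label"
    }
}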
iRules: Disabling Event Processing

The Problem

One of our customers was recently trying to use LTM with iRules to replace their proxy servers. They wondered if, rather than building one big iRule that contained all the required logic, they could break out functional pieces into individual iRules. It's a fairly common request, and it makes a lot of sense in cases where one function (such as inserting a custom header in all HTTP requests) would be applied to all virtual servers whereas other functions may only be required on some virtual servers. What they were looking for was a command that allows an iRule to stop other iRules from running. That way they could create a set of rules which they could put in sequence, either by defining them in the desired order on the virtual server resource list, or by setting event priority within the iRule itself. The solution they wanted was this:

1. Insert the real client IP in a new HTTP header for all requests.
2. If the URI matches a specific pattern, rewrite the URI a specific way and choose a pool.
3. If the URI doesn't match, rewrite the URI a different way and choose a different pool.

They wanted to prevent #3 from happening if #2 already had, so they started by using a global variable (uri_rewritten) to track if a decision had been made yet. However, they were fairly certain that this was not the best way to accomplish their goal.

The Initial Solution

Here are the iRules they started with, using the variable flag and event priority to control the execution:

rule init_rewrites {
   when RULE_INIT {
      # Setup the global variable to track if a URL has already been re-written
      set ::uri_rewritten 0
   }
}

rule insert_custom_client_ip {
   # This rule is generic and needed on all virtuals
   when HTTP_REQUEST priority 10 {
      log local0.alert "Insert Client IP"
      HTTP::header insert "X-Forwarded-For" [IP::client_addr]
   }
}

rule generic_static_content_handler {
   when HTTP_REQUEST {
      # Extract the file extension
      set extension [string range [HTTP::path] [string last "." [HTTP::path]] [string length [HTTP::path]]]
      # If the extension matches against the class then re-write with the appropriate directory
      if { [matchclass $extension equals $::static_content] } {
         set new_path [format "%s%s" "/common" [HTTP::path]]
         HTTP::path $new_path
         pool static_pool
         set ::uri_rewritten 1
      }
   }
}

rule default_rewrite {
   when HTTP_REQUEST priority 1000 {
      # If the request hasn't already been re-written by a previous rule then rewrite it with this default rule
      if { $::uri_rewritten equals 0 } {
         set new_path [format "%s%s" "/proxy" [HTTP::path]]
         HTTP::path $new_path
         pool test_http_pool
      }
      set ::uri_rewritten 0
   }
}

What they really wanted to do, though, was to prevent the execution of the default_rewrite iRule entirely if the generic_static_content_handler iRule had already matched and re-written the URL. (It's worth mentioning that a global variable would actually not work as intended here, as it would be shared by all connections, resulting in false positives for some connections processing in parallel. For a connection-specific flag, a local variable could be used in this manner.) But there is a better way.

A Better Solution: The "event" command

The event command is what they are looking for. It has a "disable" option that supports disabling specific events or all events for the remainder of that connection. If only selected events are disabled, they can be re-enabled from within another event using the corresponding "enable" option.
With that in mind, the set of iRules above can be adjusted just slightly to allow the iRules engine to "bail out" of the ruleset if an early match is seen for a request. For starters, we no longer need the init_rewrites iRule, since the global variable it initializes for connection control is no longer needed:

rule init_rewrites {
   when RULE_INIT {
      # Setup the global variable to track if a URL has already been re-written
      set ::uri_rewritten 0
   }
}

The insert_custom_client_ip iRule is meant to apply to all connections, and it should run first, so we will leave it as is, including the priority 10, which will cause it to execute before any iRules with a higher priority value (500 is the default priority):

rule insert_custom_client_ip {
   # This rule is generic and needed on all virtuals
   when HTTP_REQUEST priority 10 {
      log local0.alert "Insert Client IP"
      HTTP::header insert "X-Forwarded-For" [IP::client_addr]
   }
}

The generic_static_content_handler iRule should disable the HTTP_REQUEST event for this connection instead of setting the variable flag, and can still run at default priority 500:

rule generic_static_content_handler {
   when HTTP_REQUEST {
      # Extract the file extension
      set extension [string range [HTTP::path] [string last "." [HTTP::path]] [string length [HTTP::path]]]
      # If the extension matches against the class then re-write with the appropriate directory
      if { [matchclass $extension equals $::static_content] } {
         set new_path [format "%s%s" "/common" [HTTP::path]]
         HTTP::path $new_path
         pool static_pool
         # disable the current event only (HTTP_REQUEST) for this connection
         event disable
      }
   }
}

The default_rewrite iRule will still run at priority 1000 (which is the highest possible event priority value, so it will run last), but can now be modified to remove the checking and setting of the flag variable, since it will now run only if a match was not found in the previous iRule:

rule default_rewrite {
   when HTTP_REQUEST priority 1000 {
      set new_path [format "%s%s" "/proxy" [HTTP::path]]
      HTTP::path $new_path
      pool test_http_pool
   }
}

Finally, we will need to add one more small iRule that runs only in the HTTP_RESPONSE event, re-enabling the HTTP_REQUEST event for the connection once the response is seen:

rule enable_HTTP_REQUEST_on_response {
   when HTTP_RESPONSE {
      event enable HTTP_REQUEST
   }
}

This last addition allows the iRule to continue to process any additional HTTP requests seen later on the same Keep-Alive connection. It's also the reason I didn't use "event disable all" in the generic_static_content_handler iRule: because we needed to re-enable iRule processing for follow-on requests after this request completes, and if all events are disabled, there would be no opportunity to do so.