Controlling a Pool Members Ratio and Priority Group with iControl
A Little Background

A question came in through the iControl forums about controlling a pool member's ratio and priority programmatically. The issue really involves how the APIs use multi-dimensional arrays, but I thought it would be a good opportunity to talk about ratio and priority groups for those who don't understand how they work. In the first part of this article, I'll talk a little about what pool members are and how their ratio and priorities apply to how traffic is assigned to them in a load balancing setup. The details in this article were based on BIG-IP version 11.1, but the concepts apply to previous versions as well.

Load Balancing

In its very basic form, a load balancing setup involves a virtual IP address (referred to as a VIP) that virtualizes a set of backend servers. The idea is that if your application gets very popular, you don't want to have to rely on a single server to handle the traffic. A VIP contains an object called a "pool", which is essentially a collection of servers that it can distribute traffic to. The method of distributing traffic is referred to as a "load balancing method". You may have heard the term "Round Robin" before. In this method, connections are passed one at a time from server to server. In most cases, though, this is not the best method due to characteristics of the application you are serving. Here is a list of the available load balancing methods in BIG-IP version 11.1.

Load Balancing Methods in BIG-IP version 11.1

Round Robin: Specifies that the system passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. This method works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.

Ratio (member): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine within the pool.

Least Connections (member): Specifies that the system passes a new connection to the node that has the least number of current connections in the pool. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Observed (member): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (member) in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.

Predictive (member): Uses the ranking method used by the Observed (member) method, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.

Ratio (node): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine across all pools of which the server is a member.

Least Connections (node): Specifies that the system passes a new connection to the node that has the least number of current connections out of all pools of which a node is a member. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

Fastest (node): Specifies that the system passes a new connection based on the fastest response of all pools of which a server is a member. This method might be particularly useful in environments where nodes are distributed across different logical networks.

Observed (node): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (node) in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.

Predictive (node): Uses the ranking method used by the Observed (node) method, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.

Dynamic Ratio (node): This method is similar to Ratio (node) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

Fastest (application): Passes a new connection based on the fastest response of all currently active nodes in a pool. This method might be particularly useful in environments where nodes are distributed across different logical networks.

Least Sessions: Specifies that the system passes a new connection to the node that has the least number of current sessions. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current sessions.

Dynamic Ratio (member): This method is similar to Ratio (member) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

L3 Address: This method functions in the same way as the Least Connections methods. We are deprecating it, so you should not use it.

Weighted Least Connections (member): Specifies that the system uses the value you specify in Connection Limit to establish a proportional algorithm for each pool member. The system bases the load balancing decision on that proportion and the number of current connections to that pool member. For example, member_a has 20 connections and its connection limit is 100, so it is at 20% of capacity. Similarly, member_b has 20 connections and its connection limit is 200, so it is at 10% of capacity. In this case, the system selects member_b. This algorithm requires all pool members to have a non-zero connection limit specified.

Weighted Least Connections (node): Specifies that the system uses the value you specify in the node's Connection Limit and the number of current connections to a node to establish a proportional algorithm. This algorithm requires all nodes used by pool members to have a non-zero connection limit specified.

Ratios

The ratio is used by the ratio-related load balancing methods to load balance connections. The ratio specifies the ratio weight to assign to the pool member. Valid values range from 1 through 100. The default is 1, which means that each pool member has an equal ratio proportion. So, if you have server1 with a ratio value of 10 and server2 with a ratio value of 1, server1 will be served 10 connections for every one that server2 receives. This can be useful when you have different classes of servers with different performance capabilities.

Priority Group

The priority group is a number that groups pool members together. The default is 0, meaning that the member has no priority. To specify a priority, you must activate priority group usage when you create a new pool or when adding or removing pool members. When activated, the system load balances traffic according to the priority group number assigned to the pool member. The higher the number, the higher the priority, so a member with a priority of 3 has higher priority than a member with a priority of 1.

The easiest way to think of priority groups is as if you are creating mini-pools of servers within a single pool. You put members A, B, and C into priority group 5 and members D, E, and F into priority group 1. Members A, B, and C will be served traffic according to their ratios (assuming you have ratio load balancing configured). If all of those servers have reached their thresholds, then traffic will be distributed to servers D, E, and F in priority group 1.

The default setting for priority group activation is Disabled. Once you enable this setting, you can specify pool member priority when you create a new pool or on a pool member's properties screen. The system treats same-priority pool members as a group. To enable priority group activation in the admin GUI, select Less than from the list, and in the Available Member(s) box, type a number from 0 to 65535 that represents the minimum number of members that must be available in one priority group before the system directs traffic to members in a lower priority group. When a sufficient number of members become available in the higher priority group, the system again directs traffic to the higher priority group.
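As a side note before we get to the API: on more recent TMOS versions the same ratio, priority group, and activation settings can also be applied from the command line with tmsh. This is a sketch; the pool name and member address are placeholders, and exact syntax can vary slightly by version:

# Set a member's ratio to 10 and place it in priority group 5 (hypothetical pool/member)
tmsh modify ltm pool web_pool members modify { 10.10.10.1:80 { ratio 10 priority-group 5 } }

# Priority group activation: fail over to the next group when fewer than 2 members are available
tmsh modify ltm pool web_pool min-active-members 2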
Implementing in Code

The two methods to retrieve the priority and ratio values are very similar. They both take two parameters: a list of pools to query, and a 2-D array of members (a list of members for each pool passed in).

long [] [] get_member_priority(
  in String [] pool_names,
  in Common::AddressPort [] [] members
);
long [] [] get_member_ratio(
  in String [] pool_names,
  in Common::AddressPort [] [] members
);

The following PowerShell function (utilizing the iControl PowerShell Library) takes as input a pool and a single member. It then makes a call to query the ratio and priority for the specific member and writes them to the console.

function Get-PoolMemberDetails()
{
  param(
    $Pool = $null,
    $Member = $null
  );
  $AddrPort = Parse-AddressPort $Member;
  $RatioAofA = (Get-F5.iControl).LocalLBPool.get_member_ratio(
    @($Pool), @( @($AddrPort) ) );
  $PriorityAofA = (Get-F5.iControl).LocalLBPool.get_member_priority(
    @($Pool), @( @($AddrPort) ) );
  $ratio = $RatioAofA[0][0];
  $priority = $PriorityAofA[0][0];
  "Pool '$Pool' member '$Member' ratio '$ratio' priority '$priority'";
}

Setting the values with the set_member_priority and set_member_ratio methods takes the same first two parameters as their associated get_* methods, but adds a third parameter for the priorities and ratios for the pool members.

set_member_priority(
  in String [] pool_names,
  in Common::AddressPort [] [] members,
  in long [] [] priorities
);
set_member_ratio(
  in String [] pool_names,
  in Common::AddressPort [] [] members,
  in long [] [] ratios
);

The following PowerShell function takes as input the Pool and Member with optional values for the Ratio and Priority. If either of those is set, the function will call the appropriate iControl method to set its value.

function Set-PoolMemberDetails()
{
  param(
    $Pool = $null,
    $Member = $null,
    $Ratio = $null,
    $Priority = $null
  );
  $AddrPort = Parse-AddressPort $Member;
  if ( $null -ne $Ratio )
  {
    (Get-F5.iControl).LocalLBPool.set_member_ratio(
      @($Pool), @( @($AddrPort) ), @($Ratio) );
  }
  if ( $null -ne $Priority )
  {
    (Get-F5.iControl).LocalLBPool.set_member_priority(
      @($Pool), @( @($AddrPort) ), @($Priority) );
  }
}

In case you were wondering how to create the Common::AddressPort structure for the $AddrPort variables in the above examples, here's a helper function I wrote to allocate the object and fill in its properties.

function Parse-AddressPort()
{
  param($Value);
  $tokens = $Value.Split(":");
  $r = New-Object iControl.CommonAddressPort;
  $r.address = $tokens[0];
  $r.port = $tokens[1];
  $r;
}

Download The Source

The full source for this example can be found in the iControl CodeShare under PowerShell PoolMember Ratio and Priority.
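To tie the pieces together, here is how the functions above might be called once the iControl snap-in is initialized and connected. The pool name and member address are placeholders for this sketch:

# Query the current ratio and priority for a member (hypothetical pool/member)
Get-PoolMemberDetails -Pool "web_pool" -Member "10.10.10.1:80";

# Set the member's ratio to 10 and move it to priority group 5
Set-PoolMemberDetails -Pool "web_pool" -Member "10.10.10.1:80" -Ratio 10 -Priority 5;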
From ASM to Advanced WAF: Advancing your Application Security

TL;DR: As of April 01, 2021, F5 has officially placed Application Security Manager (ASM) into End of Sale (EoS) status, signifying the eventual retirement of the product (F5 Support Announcement - K72212499). Existing ASM or BEST bundle customers under a valid support contract running BIG-IP version 14.1 or greater can simply reactivate their licenses to instantly upgrade to Advanced WAF (AdvWAF) completely free of charge.

Introduction

Protecting your applications is becoming more challenging every day; applications are getting more complex, and attackers are getting more advanced. Over the years we have heard your feedback that managing a Web Application Firewall (WAF) can be cumbersome and that you needed new solutions to protect against the latest generation of attacks. Advanced Web Application Firewall, or AdvWAF, is an enhanced version of the Application Security Manager (ASM) product that introduces new attack mitigation techniques and many quality-of-life features designed to reduce operational overhead. On April 01, 2021, F5 started providing free upgrades for existing Application Security Manager customers to the Advanced WAF license. Keep on reading for:

A brief history of ASM and AdvWAF
How the AdvWAF license differs from ASM (ASM vs. AdvWAF)
How to determine if your BIG-IPs are eligible for this free upgrade
Performing the license upgrade

How did we get here?

For many years, ASM has been the gold-standard Web Application Firewall (WAF) used by thousands of organizations to help secure their most mission-critical web applications from would-be attackers. F5 acquired the technology behind ASM in 2004 and subsequently "baked" it into the BIG-IP product, immediately becoming the leading WAF product on the market. In 2018, after nearly 14 years of ASM development, F5 released the new Advanced WAF license to address the latest threats. Since that release, both ASM and AdvWAF have coexisted, granting customers the flexibility to choose between the traditional or enhanced versions of the BIG-IP WAF product. As new features were released, they were almost always unique to AdvWAF, creating further divergence as time went on, and often sparking a few common questions (all of which we will inevitably answer in this very article) such as:

Is ASM going away?
What is the difference between ASM and AdvWAF?
Will feature X come to ASM too? I need it!
How do I upgrade from ASM to AdvWAF?
Is the BEST bundle no longer really the BEST?

To simplify things for our customers (and us too!), we decided to announce ASM as End of Sale (EoS), starting on April 01, 2021. This milestone, for those unfamiliar, means that the ASM product can no longer be purchased after April 01 of this year – it is in the first of 4 stages of product retirement. An important note is that no new features will be added to ASM going forward.

So, what's the difference?

A common question we get often is "How do I migrate my policy from ASM to AdvWAF?" The good news is that the policies are functionally identical, running on BIG-IP, with the same web interface, and have the same learning engine and underlying behavior. In fact, our base policies can be shared across ASM, AdvWAF, and NGINX App Protect (NAP). The AdvWAF license simply unlocks additional features beyond what ASM has, that is it – all the core behaviors of the two products are identical otherwise.
So, if an engineer is certified in ASM and has managed ASM security policies previously, they will be delighted to find that nothing has changed except for the addition of new features. This article does not aim to provide an exhaustive list of every feature difference between ASM and AdvWAF. Instead, below is a list of the most popular features introduced in the AdvWAF license that we hope you can take advantage of. At the end of the article, we provide more details on some of these features:

Secure Guided Configurations
Unlimited L7 Behavioral DoS
DataSafe (client-side encryption)
OWASP Compliance Dashboard
Threat Campaigns (includes Bot Signature updates)
Additional ADC functionality
Micro-services protection
Declarative WAF automation

I'm interested, what's the catch?

There is none! F5 is a security company first and foremost, with a mission to provide the technology necessary to secure our digital world. By providing important usability enhancements like Secure Guided Config and the OWASP Compliance Dashboard for free to existing ASM customers, we aim to reduce the operational overhead associated with managing a WAF and help make applications safer than they were yesterday - it's a win-win.

If you currently own a STANDALONE, ADD-ON, or BEST bundle ASM product running version 14.1 or later with an active support contract, you are eligible to take advantage of this free upgrade. This upgrade does not apply to customers running ELA licensing or standalone ASM subscription licenses at this time. If you are running a BIG-IP Virtual Edition, you must be running at least a V13 license. To perform the upgrade, all you need to do is simply REACTIVATE your license, THAT IS IT! There is no time limit to perform the license reactivation, and this free upgrade offer does not expire.

*Please keep in mind that reactivating your license does trigger a configuration load event which will cause a brief interruption in traffic processing; thus, it is always recommended to perform this in a maintenance window.

Step 1: Navigate to the license page (System > License) and start a reactivation (illustrated with screenshots in the original post).

Step 2: Choose "Automatic" if your BIG-IP can communicate outbound to the Internet and talk to the F5 licensing server. Choose "Manual" if your BIG-IP cannot reach the F5 licensing server directly through the Internet. Click Next, and the system will reactivate your license.

After you've completed the license reactivation, the quickest way to know if you now have AdvWAF is by looking under the Security menu. If you see "Guided Configuration", the license upgrade was completed successfully. You can also log in to the console and look for the following feature flags in the /config/bigip.license file to confirm it was completed successfully by running:

grep -e waf_gc -e mod_waf -e mod_datasafe bigip.license

You should see the following flags set to enabled:

waf_gc: enabled
mod_waf: enabled
mod_datasafe: enabled

*Please note that the GUI will still reference ASM in certain locations, such as on the resource provisioning page; this is not an indication of any failure to upgrade to the AdvWAF license.

*Under Resource Provisioning you should now see that FPS is licensed. This will need to be provisioned if you plan on utilizing the new AdvWAF DataSafe feature, explained in more detail in the Appendix below.

For customers with a large install base, you can perform license reactivation through the CLI.
Please refer to the following article for instructions: https://support.f5.com/csp/article/K2595

Conclusion

F5 Advanced WAF is an enhanced WAF license now available for free to all existing ASM customers running BIG-IP version 14.1 or greater, requiring only a simple license reactivation. The AdvWAF license will provide immediate value to your organization by delivering visibility into the OWASP Top 10 compliance of your applications, configuration wizards designed to build robust security policies quickly, enhanced automation capabilities, and more. If you are running ASM with BIG-IP version 14.1 or greater, what are you waiting for? (Please DO wait for your change window though 😊)

Acknowledgments

Thanks to Brad Scherer, John Marecki, Michael Everett, and Peter Scheffler for contributing to this article!

Appendix: More details on select AdvWAF features

Guided Configurations

One of the most common requests we hear is, "can you make WAF easier?" If there was such a thing as an easy button for WAF configurations, Guided Configs are that button. Guided Configurations easily take you through complex configurations for various use cases such as Web Apps, OWASP Top 10, API Protection, DoS, and Bot Protection.

L7 DoS – Behavioral DoS

Unlimited Behavioral DoS (BaDoS) provides automatic protection against DoS attacks by analyzing traffic behavior using machine learning and data analysis. With ASM you were limited to applying this type of DoS profile to a maximum of 2 virtual servers. The AdvWAF license completely unlocks this capability, removing the 2-virtual-server limitation from ASM. Working together with other BIG-IP DoS protections, Behavioral DoS examines traffic flowing between clients and application servers in data centers, and automatically establishes the baseline traffic/flow profiles for Layer 7 (HTTP) and Layers 3 and 4.

DataSafe (*FPS must be provisioned)

DataSafe is best explained as real-time L7 data encryption, designed to protect websites from Trojan attacks by encrypting data at the application layer on the client side. Encryption is performed on the client side using a public key generated by the BIG-IP system and provided uniquely per session. When the encrypted information is received by the BIG-IP system, it is decrypted using a private key that is kept on the server side. It is intended to protect passwords, PINs, PII, and PHI, so that if any information is compromised via MITB or MITM it is useless to the attacker. DataSafe is included with the AdvWAF license, but the Fraud Protection Service (FPS) must be provisioned by going to System > Resource Provisioning.

OWASP Compliance Dashboard

Think your policy is air-tight? The OWASP Compliance Dashboard details the coverage of each security policy for the top 10 most critical web application security risks, as well as the changes needed to meet OWASP compliance. Using the dashboard, you can quickly improve security risk coverage and perform security policy configuration changes.

Threat Campaigns (includes Bot Signature updates)

Threat Campaigns allow you to do more with fewer resources. This feature is unlocked with the AdvWAF license; it does, however, require an additional paid subscription above and beyond that. This paid subscription does NOT come with the free AdvWAF license upgrade. F5's Security Research Team (SRT) discovers attacks with honeypots, performs analysis, and creates attack signatures you can use with your security policies.
These signatures come with an extremely low false-positive rate, as they are strictly based on REAL attacks observed in the wild. The Threat Campaign subscription also adds bot signature updates as part of the solution.

Additional ADC Functionality

The AdvWAF license comes with all of the Application Delivery Controller (ADC) functionality required to both deliver and protect a web application. An ASM standalone license came with only a very limited subset of ADC functionality – a limit to the number of pool members, zero persistence profiles, and very few load balancing methods, just to name a few. This meant that you almost certainly required a Local Traffic Manager (LTM) license in addition to ASM to successfully deliver an application. The AdvWAF license removes many of those limitations: unlimited pool members, all HTTP/web-pertinent persistence profiles, and most load balancing methods, for example.
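One practical footnote on the DataSafe prerequisite covered above: FPS can also be provisioned from the command line. Something along these lines should work on recent versions (treat this as a sketch; available resource levels vary by platform):

# Provision the Fraud Protection Service module, then persist the change
tmsh modify sys provision fps level nominal
tmsh save sys config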
Two-Factor Authentication With Google Authenticator And LDAP

Introduction

Earlier this year Google released their time-based one-time password (TOTP) solution named Google Authenticator. A TOTP is a single-use code with a finite lifetime that can be calculated by two parties (client and server) using a shared secret and a synchronized clock (see RFC 4226 for additional information). In the case of Google Authenticator, the TOTPs are generated using a software (soft) token on a mobile device. Google currently offers applications for the Apple iPhone, Android-based devices, and Blackberry handsets. A user authenticating with a Google Authenticator-enabled service will require the possession of this software token. In order for the token to be effective, it must not be able to be duplicated, and the shared secret should be closely guarded.

Google Authenticator's soft token solution offers a number of advantages over other commercially available solutions. It is free to use (all applications are free to download), the TOTP algorithm is open source, well-known, and well-tested, and finally it does not require a dedicated server for processing tokens. While certain potential weaknesses in SHA-1 have been identified, none of them can be exploited within the 30-second timeframe of the TOTP's usability. For all intents and purposes, SHA-1 is reasonably secure, well-tested, and purpose-appropriate for this application. The algorithm, however, is only as secure as the users and administrators are at protecting the shared secret used in token processing.

Calculating The Google Authenticator TOTP

The Google Authenticator TOTP is calculated by generating an HMAC-SHA1 token, which uses a 10-byte base32-encoded shared secret as a key and Unix time (epoch) divided into a 30-second interval as inputs. The resulting 20-byte (160-bit) token is converted to a 40-character hexadecimal string, and the least significant (last) hex digit is then used to calculate a 0-15 offset. The offset is then used to read the next 8 hex digits from the offset. The resulting 8 hex digits are then AND'd with 0x7FFFFFFF (2,147,483,647), then the modulo of the resultant integer and 1,000,000 is calculated, which produces the correct code for that 30-second period.

Base32 encoding and decoding were covered in my previous Tech Tip titled Base32 Encoding And Decoding With iRules. The Tech Tip details the process for decoding a user's base32-encoded key to binary as well as converting a binary key to base32. The HMAC-SHA256 token calculation iRule was originally submitted by Nat to the Codeshare on DevCentral. The iRule was slightly modified to support the SHA-1 algorithm, but is otherwise taken directly from the pseudocode outlined in RFC 2104. These two pieces of code contribute the bulk of the processing of the Google Authenticator code. The rest is done with simple bitwise and arithmetic functions.

Google Authenticator Two-Factor Authentication Process

Installing Google Authenticator Two-Factor Authentication

The installation of Google Authenticator two-factor authentication on your BIG-IP is divided into six sections: creating an LDAP authentication configuration, configuring an LDAP (Active Directory) authentication profile, testing your authentication profile, adding the Google Authenticator iRule and "user_to_google_auth" mapping data group, attaching the iRule to the authentication profile, and finally generating soft tokens for your users. The process is broken out into steps, as trying to complete all the sections in tandem can be difficult to troubleshoot.
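Before walking through the configuration, it can help to see the TOTP math described above outside of an iRule. The following is a rough reference sketch using openssl, xxd, and the coreutils base32 utility (the example key is arbitrary; this is for sanity-checking codes against your token, not for production use):

#!/bin/bash
# Decode the user's base32 shared secret to hex (example key is arbitrary)
KEY_HEX=$(printf 'ONSWG4TFOQYTEMZU' | base32 -d | xxd -p)

# 30-second time step as an 8-byte big-endian hex counter
STEP=$(( $(date +%s) / 30 ))
MSG=$(printf '%016x' "$STEP")

# HMAC-SHA1 of the counter, keyed with the shared secret (40 hex chars out)
HMAC=$(printf '%s' "$MSG" | xxd -r -p |
  openssl dgst -sha1 -mac HMAC -macopt "hexkey:$KEY_HEX" |
  awk '{print $2}')

# Dynamic truncation: the last hex digit selects a 4-byte (8 hex digit) window
OFFSET=$(( 0x${HMAC: -1} ))
DBC=${HMAC:$(( OFFSET * 2 )):8}

# Mask with 0x7FFFFFFF and take modulo 1,000,000 for the 6-digit code
printf '%06d\n' $(( (0x$DBC & 0x7FFFFFFF) % 1000000 ))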
Creating An LDAP (Active Directory) Authentication Configuration

The LDAP profile we will configure will be extremely basic: no SSL, no Active Directory, etc. A detailed walkthrough for more advanced deployments can be found in our best practices guide: Configuring LDAP remote authentication for Active Directory.

1. Login to your BIG-IP using administrator credentials
2. Navigate to Local Traffic > Profiles > Authentication > Configurations
3. Click "Create" in the upper right-hand corner
4. Select "LDAP" from the "Type" drop-down menu
5. Now fill in the fields with your environment-specific values:
   Name: ldap.f5test.local
   Type: LDAP
   Remote LDAP Tree: dc=f5test, dc=local
   Host(s): <IP address(es) of LDAP server(s)>
   Service Port: 389 (default)
   LDAP Version: 3 (default)
   Bind DN: cn=ldap_bind_acct, dc=f5test, dc=local (if your LDAP server allows anonymous binds you may not need this option)
   Bind Password: <admin password>
   Confirm Bind Password: <admin password>
6. Click "Finished" to save the configuration

Configuring An LDAP (Active Directory) Authentication Profile

1. Navigate to Local Traffic > Profiles > Authentication > Profiles
2. Click "Create" in the upper right-hand corner
3. Select "LDAP" from the "Type" drop-down menu
4. Fill in the fields with appropriate values:
   Name: ldap.f5test.local
   Type: LDAP
   Configuration: ldap.f5test.local (select the previously named configuration from the drop-down)
   Rule: (leave this unchecked and not enabled for now, but this is where we will enable the Google Authenticator iRule shortly)
5. Click "Finished"

Test Your Authentication Profile

1. Create a basic HTTP virtual server with your LDAP authentication profile enabled on the virtual
2. Access your virtual from a web browser and you should be prompted with an HTTP Basic Authentication credential form
3. Test with known-working credentials. If everything works, you're good to go; if not, you'll need to troubleshoot the authentication issue

Adding the Google Authenticator iRule

1. Go to the DevCentral Codeshare and download the Google Authenticator iRule
2. Navigate to Local Traffic > iRules > iRule List
3. Click "Create" in the upper right-hand corner
4. Name your iRule "google_authenticator_plus_ldap_two_factor" and paste the iRule into the "Definition" section
5. Click "Finished" when you're done

Attaching The Google Authenticator iRule To Your Authentication Profile

1. Go back to the "Authentication Profile" section by browsing to Local Traffic > Profiles > Authentication > Profiles
2. Select your LDAP profile from the list
3. Now select the "google_authenticator_plus_ldap_two_factor" iRule from the "Rule" drop-down
4. Click "Finished"

Generating Software Tokens For Users

In addition to the Google Authenticator iRule, we also wrote a Google Authenticator Soft Token Generator iRule that will generate soft tokens for your users. The iRule can be added directly to an HTTP virtual server without a pool and accessed directly to create tokens. There are a few available fields in the generator: account, pre-defined secret, and a QR code option. The "account" field defines how to label the soft token within the user's mobile device and can be useful if the user has multiple soft tokens on the same device (I have 3 and need to label them to keep them straight). A 10-byte string can be used as a pre-defined secret for conversion to a base32-encoded key.
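If you do want to pre-generate a shared secret outside of the generator iRule, a random 10-byte, base32-encoded key (16 characters, matching the example entry later in this article) can be produced on most Linux systems with something like the following. This is a sketch; the coreutils base32 tool may be absent on older distributions:

# 10 random bytes -> 16 base32 characters, suitable for the data group
openssl rand 10 | base32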
We will advise you against using a pre-defined key, because a key known to the user is something they know (as opposed to something they have) and could potentially be regenerated out-of-band, thereby nullifying the benefits of two-factor authentication. Lastly, there is an option to generate a QR code by sending an HTTPS request to Google and returning the QR code as an image. While this is convenient, it could be seen as insecure since it may wind up in Google's logs somewhere. You'll have to decide if that is a risk you're willing to take for the convenience it provides.

Once the token has been generated, it will need to be added to a data group on the BIG-IP:

1. Navigate to Local Traffic > iRules > Data Group Lists
2. Select "Create" from the upper right-hand corner if the data group does not yet exist. If it exists, just select it from the list.
3. Name the data group "user_to_google_auth" (the data group name can be changed in the RULE_INIT section of the Google Authenticator iRule)
4. The type of data group will be "string"
5. Type the "username" into the "string" field and paste the "Google Authenticator key" into the "value" field
6. Click "Add", and the username/key pair should appear in the list as such: user := ONSWG4TFOQYTEMZU
7. Click "Finished" when all your username/key pairs have been added.

Your user can scan the QR code or type the key into their device manually. After they scan the QR code, the account name should appear along with the TOTP for the account. The image below is how the soft token appears in the Google Authenticator iPhone application:

Once again, do not let the user leave with a copy of the plain-text key. Knowing their key value will negate the value of having the token in the first place. Once the key has been added to the BIG-IP and the user's device, and they've tested their access, destroy any reference to the key outside the BIG-IP's data group. If you're worried about having the keys in plain text on the BIG-IP, they can be encrypted with AES or stored off-box in LDAP and only queried via a secure connection. This is beyond the scope of this article, but doable with iRules.

Testing and Troubleshooting

There are a lot of moving pieces in this iRule, so troubleshooting can be a bit daunting at first glance, but because all of the pieces can be separated into their constituents, the problem is usually identified quickly. There are five pieces that make up this solution: the LDAP service, the BIG-IP LDAP profile, the Google Authenticator iRule, the "user_to_google_auth" mapping data group, and finally the soft token. Try to separate them from each other to expedite the troubleshooting process. Here are a few helpful hints in troubleshooting potential issues:

1. Are all the clocks synchronized? The BIG-IP and LDAP server can be tested from the command line by running 'ntpdate -q pool.ntp.org'. If the clocks are more than a few milliseconds off, they'll need to be adjusted. An NTP server should be configured for all devices. Likewise, the user's mobile device must be configured to use network time or else the calculated value will always be wrong. Remember that timezones do not matter when using Unix time.

2. Is basic LDAP working without the iRule attached? Before ever touching any of the Google Authenticator related iRules, data groups, devices, etc., your LDAP configuration should be in working order.
If you're having problems finding the issue, enable "debug logging" at the bottom of the LDAP authentication configuration page on your BIG-IP and tail the logs on your LDAP server. Revisit the best practices guide if you are still unsure about any configuration options.

3. Turn on (or increase) logging for the Google Authenticator iRule. In the RULE_INIT section of the Google Authenticator iRule, there is a debug logging option. Set it to '2' and all actions from the iRule will be logged to /var/log/ltm. If you see one particular area that is consistently hanging, investigate it further.

Conclusion

With every passing day, system security becomes a greater concern. Today's attacks are far more sophisticated and costly than those of days past. With all the stories of stolen laptops and other devices in the field, it is a little easier to sleep as a systems administrator knowing that a tech-aware thief has one more hurdle to surpass in an effort to compromise your infrastructure. The implementation costs of deploying two-factor authentication with Google Authenticator in an existing F5 infrastructure are very low, assuming your employees have company-issued mobile devices. The cost can be reduced to the man-hours required to install this iRule and generate tokens for your users. The cost is almost certainly less than that of a single incident of a compromised account. Until next time, batten down the hatches and get that two-factor project underway that's been on the backburner for two years.

Code and References

Google Authenticator iRule – Documentation and code for the iRule used in this Tech Tip
Google Authenticator Soft Token Generator iRule – iRule for generating soft tokens for users
RFC 4226 – HOTP: An HMAC-Based One-Time Password Algorithm
RFC 2104 – HMAC: Keyed-Hashing for Message Authentication
RFC 4648 – The Base16, Base32, and Base64 Data Encodings
SOL11072 – Configuring LDAP remote authentication for Active Directory
One Time Passwords via an SMS Gateway with BIG-IP Access Policy Manager

One-time passwords, or OTPs, are used (as the name indicates) for a single session or transaction. The plus side is a more secure deployment; the downside is two-fold: first, most solutions involve a token system, which is costly in management, dollars, and complexity, and second, people are lousy at remembering things, so a delivery system for that OTP is necessary. The exercise in this tech tip is to employ BIG-IP APM to generate the OTP and pass it to the user via an SMS gateway, eliminating the need for a token-creating server/security appliance while reducing cost and complexity.

Getting Started

This guide was developed by F5er Per Boe utilizing the newly released BIG-IP version 10.2.1. The "-secure" option for the mcget command is new in this version and is required in one of the steps for this solution. Also, this solution uses the Clickatell SMS Gateway to deliver the OTPs. Their API is documented at http://www.clickatell.com/downloads/http/Clickatell_HTTP.pdf. Other gateway providers with a web-based API could easily be substituted. Also, there are steps at the tail end of this guide to utilize the BIG-IP's built-in mail capabilities to email the OTP during testing in lieu of SMS.

The process in delivering the OTP is shown in Figure 1. First, a request is made to the BIG-IP APM. The policy is configured to authenticate the user's phone number in Active Directory, and if successful, generate an OTP and pass it along to the SMS gateway via the HTTP API. The user will then use the OTP to enter into the form updated by APM before being allowed through to the server resources.

BIG-IP APM Configuration

Before configuring the policy, an access profile needs to be created, as do a couple of authentication servers. First, let's look at the authentication servers.

Authentication Servers

To create servers used by BIG-IP APM, navigate to Access Policy->AAA Servers and then click create. This profile is simple: supply your domain server, domain name, and admin username and password as shown in Figure 2.

The other authentication server is for the SMS gateway, and since it is an HTTP API we're using, we need the HTTP-type server as shown in Figure 3. Note that the hidden form values highlighted in red will come from your Clickatell account information. Also note that the form method is GET, the form action references the Clickatell API interface, and that the match type is set to look for a specific string. The Clickatell SMS Gateway expects the following format:

https://api.clickatell.com/http/sendmsg?api_id=xxxx&user=xxxx&password=xxxx&to=xxxx&text=xxxx

Finally, the successful logon detection value highlighted in red at the bottom of Figure 3 should be modified to the response code returned from the SMS gateway. Now that the authentication servers are configured, let's take a look at the access profile and create the policy.

Access Profile & Policy

Before we can create the policy, we need an access profile, shown below in Figure 4 with all default settings. Now that that is done, we click on Edit under the Access Policy column highlighted in red in Figure 5. The default policy is bare bones, or as some call it, empty. We'll work our way through the objects, taking screen captures as we go and making notes as necessary. To add an object, just click the "+" sign after the Start flag.

The first object we'll add is a Logon Page, as shown in Figure 6. No modifications are necessary here, so you can just click save. Next, we'll configure the Active Directory authentication, so we'll add an AD Auth object.
The only setting here in Figure 7 is selecting the server we created earlier. Following the AD Auth object, we need to add an AD Query object on the AD Auth successful branch, as shown in Figures 8 and 9. The server is selected in the properties tab, and then we create an expression in the branch rules tab. To create the expression, click change, and then select the Advanced tab. The expression used in this AD Query branch rule:

expr { [mcget {session.ad.last.attr.mobile}] != "" }

Next we add an iRule Event object to the AD Query OK branch that will generate the one-time password and provide logging. Figure 10 shows the iRule Event object configuration. The iRule referenced by this event is below. The logging is there for troubleshooting purposes, and should probably be disabled in production.

when ACCESS_POLICY_AGENT_EVENT {
  expr srand([clock clicks])
  set otp [string range [format "%08d" [expr int(rand() * 1e9)]] 1 6 ]
  set mail [ACCESS::session data get "session.ad.last.attr.mail"]
  set mobile [ACCESS::session data get "session.ad.last.attr.mobile"]
  set logstring mail,$mail,otp,$otp,mobile,$mobile
  ACCESS::session data set session.user.otp.pw $otp
  ACCESS::session data set session.user.otp.mobile $mobile
  ACCESS::session data set session.user.otp.username [ACCESS::session data get "session.logon.last.username"]
  log local0.alert "Event [ACCESS::policy agent_id] Log $logstring"
}

when ACCESS_POLICY_COMPLETED {
  log local0.alert "Result: [ACCESS::policy result]"
}

On the fallback path of the iRule Event object, add a Variable Assign object as shown in Figure 10b. Note that the first assignment should be set to secure, as indicated in the image with the [S]. The expressions in Figure 10b are:

session.logon.last.password = expr { [mcget {session.user.otp.pw}]}
session.logon.last.username = expr { [mcget {session.user.otp.mobile}]}

On the fallback path of the AD Query object, add a Message Box object as shown in Figure 11 to alert the user if no mobile number is configured in Active Directory.

On the fallback path of the Event OTP object, we need to add the HTTP Auth object. This is where the SMS gateway we configured in the authentication server is referenced. It is shown in Figure 12.

On the fallback path of the HTTP Auth object, we need to add a Message Box as shown in Figure 13 to communicate the error to the client.

On the Successful branch of the HTTP Auth object, we need to add a Variable Assign object to store the username. A simple expression and a unique name for this variable object are all that is changed. This is shown in Figure 14.

On the fallback branch of the Username Variable Assign object, we'll configure the OTP logon page, which requires a Logon Page object (shown in Figure 15). I haven't mentioned it yet, but the name field of all these objects isn't a required change, though adding information specific to the object helps with readability. On this form, only one entry field is required, the one-time password, so the second password field (enabled by default) is set to none and the initial username field is changed to password. The Input field below is changed to reflect the type of logon to better cue the user.

Finally, we'll finish off with an Empty Action object where we'll insert an expression to verify the OTP. The name is configured in properties and the expression in the branch rules, as shown in Figures 16 and 17. Again, you'll want to click advanced on the branch rules to enter the expression.
The expression used in the branch rules above is:

expr { [mcget {session.user.otp.pw}] == [mcget -secure {session.logon.last.otp}] }

Note again that the -secure option is only available in version 10.2.1 forward. Now that we're done adding objects to the policy, one final step is to click on the Deny following the OK branch of the OTP Verify Empty Action object and change it from Deny to Allow. Figure 18 shows how it should look in the visual policy editor window. Now that the policy is completed, we can attach the access profile to the virtual server and test it out, as can be seen in Figures 19 and 20 below.

Email Option

If during testing you'd rather send emails than utilize the SMS gateway, then configure your BIG-IP for mail support (Solution 3664), keep the Logging object, lose the HTTP Auth object, and configure the system with this script to listen for the messages sent to /var/log/ltm from the configured Logging object:

#!/bin/bash
while true
do
  tail -n0 -f /var/log/ltm | while read line
  do
    var2=`echo $line | grep otp | awk -F'[,]' '{ print $2 }'`
    var3=`echo $line | grep otp | awk -F'[,]' '{ print $3 }'`
    var4=`echo $line | grep otp | awk -F'[,]' '{ print $4 }'`
    if [ "$var3" = "otp" -a -n "$var4" ]; then
      echo Sending pin $var4 to $var2
      echo One Time Password is $var4 | mail -s $var4 $var2
    fi
  done
done

The log messages look like this:

Jan 26 13:37:24 local/bigip1 notice apd[4118]: 01490113:5: b94f603a: session.user.otp.log is mail,user1@home.local,otp,609819,mobile,12345678

The output from the script as configured looks like this:

[root@bigip1:Active] config # ./otp_mail.sh
Sending pin 239272 to user1@home.local

Conclusion

The BIG-IP APM is an incredibly powerful tool to add to the LTM toolbox. Whether using the mail system or an SMS gateway, you can take a bite out of your infrastructure complexity by using this solution to eliminate the need for a token management service. Many thanks again to F5er Per Boe for this excellent solution!
F5 Automation with Ansible Tips and Tricks

Getting Started with Ansible and F5

In this article we are going to provide you with a simple set of videos that demonstrate, step by step, how to implement automation with Ansible. In the last video, however, we will demonstrate how telemetry and automation may be used in combination to address potential performance bottlenecks and ensure application availability. To start, we will provide you with details on how to get started with Ansible automation using the Ansible Automation Platform®.

Backing up your F5 device

Once a user has installed and configured Ansible Automation Platform, we will now transition to a basic maintenance function: an automated backup of a BIG-IP hardware device or Virtual Edition (VE). This is always recommended before major changes are made to our BIG-IP devices. (A minimal playbook sketch for this task follows the resource links below.)

Configuring a Virtual Server

Next, we will use Ansible to configure a virtual server, a task that is most frequently performed manually on the BIG-IP. When changes to a BIG-IP are infrequent, manual intervention may not be so cumbersome; however, large enterprise customers may need to perform these tasks hundreds of times.

Replace an SSL Certificate

The next video will demonstrate how to use Ansible to replace an SSL certificate on a BIG-IP. It is important to note that this video will show the certificate being applied on a BIG-IP and then validated by browsing to the application website.

Configure and Deploy an iRule

The next administrative function will demonstrate how to configure and push an iRule using the Ansible Automation Platform® onto a BIG-IP device. Again, this is a standard administrative task that can be simply automated via Ansible.

Delete the Existing Virtual Server

Now we have to delete the above configuration to roll back to a steady state. This is a common administrative task when an application is retired. We again demonstrate how Ansible automation may be used to perform these simple administrative tasks.

Telemetry and Automation: Using Threshold Triggers to Automate Tasks and Fix Performance Bottlenecks

Now you have a clear demonstration of how to utilize Ansible automation to perform routine tasks on a BIG-IP platform. Once you have become proficient with more routine Ansible tasks, we can explore more high-level, sophisticated automation tasks. In the demonstration below, we show how BIG-IP administrators using SSL Orchestrator® (SSLO) can combine telemetry with automation to address performance bottlenecks in an application environment.

Resources:

That is a short series of tutorials on how to perform routine tasks using automation, plus a preview of a more sophisticated use of automation based upon telemetry and automatic thresholds. For more detail on our partnership, please visit our F5/Ansible page or visit the Red Hat Automation Hub for information on the F5 Ansible certified collections.

https://www.f5.com/ansible
https://www.ansible.com/products/automation-hub
https://galaxy.ansible.com/f5networks/f5_modules
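As a companion to the backup demonstration referenced above, a minimal playbook along these lines creates a UCS archive on the device and downloads it locally using the bigip_ucs_fetch module from the certified collection. This is a sketch: it assumes the f5networks.f5_modules collection is installed, and the host and credential variables are placeholders you would supply via inventory or a vault:

---
- name: Back up a BIG-IP to a UCS archive
  hosts: bigip
  connection: local
  gather_facts: false
  tasks:
    - name: Create the UCS on the device and fetch it locally
      f5networks.f5_modules.bigip_ucs_fetch:
        src: nightly_backup.ucs
        dest: /tmp/nightly_backup.ucs
        provider:
          server: "{{ bigip_host }}"
          user: "{{ bigip_user }}"
          password: "{{ bigip_pass }}"
          validate_certs: false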
LTM: Configuring IP Forwarding

A basic change in internal routing architecture and functionality between BIG-IP 4.x and LTM 9.x has caused some confusion for customers whose v4.x deployment depended on IP forwarding. Here is an explanation of the change, and the new configuration requirements to support forwarding of IP traffic using LTM.

What changed?

Both BIG-IP and LTM are default-deny devices, which means a specific configuration is required to support every desired traffic flow. In BIG-IP, packets not matching a virtual server or SNAT/NAT would be dropped, unless the BIG-IP v4.x global IP forwarding checkbox feature was enabled. With IP forwarding enabled, packets not matching a virtual or SNAT/NAT would be forwarded intact per the routing table entries. LTM also requires that all traffic match a defined TMM listener (a virtual server, SNAT, or NAT) or be dropped. However, LTM's full application proxy architecture separates routing intelligence from load balancing, and the deprecated IP forwarding feature was intentionally not included in LTM to optimize load balancing performance.

The IP forwarding checkbox feature was deprecated early in the BIG-IP 4.x tree. Although F5 has long recommended that IP forwarding be replaced with forwarding virtual servers, forwarding pools, SNATs, or NATs, some customers retained their IP forwarding configuration when upgrading to LTM v9.x. Since those various configuration options exist to support traffic previously managed by IP forwarding, the One-Time Conversion Utility (OTCU) that translates v4 configurations to v9 syntax does not presume to configure global forwarding virtual servers in place of global IP forwarding. For those customers and other administrators already familiar with BIG-IP but now using LTM, it isn't obvious how to replicate the forwarding behaviour they require.

Configuring forwarding for LTM

The recommended replacement for global IP forwarding is a wildcard forwarding virtual server configured to listen for all IP protocols, all addresses, and all ports on all VLANs. This virtual server will catch all traffic not matching another listener and forward it in accordance with LTM's routing table entries.

1. In the LTM GUI, browse to Virtual Servers & click "Create".
2. Configure the following properties:
   Destination: Network, Address=0.0.0.0, Mask=0.0.0.0
   Service Port: 0
   Type: Forwarding (IP)
   Protocol: *All Protocols
   VLAN Traffic: All VLANs
3. Click "Finished" to create the virtual server.

The resulting configuration snip looks like this:

virtual forward_vs {
  ip forward
  destination any:any
  mask none
}

This will forward all IP traffic as long as there is a matching route in the routing table. (Packets bound for destinations for which there is no route will be dropped with no ICMP notification.)

Commonly required modifications

You can limit forwarding to only traffic bound for specific subnets by specifying the appropriate subnet and mask. If a different router exists on any directly connected network, you may need to create a custom fastL4 profile with "Loose Initiation" & "Loose Close" enabled to prevent LTM from interfering with forwarded conversations traversing an asymmetrical path. If the forwarding virtual server is intended to allow outbound access for your privately addressed servers, you will need to configure a SNAT to translate the source address of that traffic to a publicly routable address.
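As an aside for readers on newer TMOS versions: the wildcard forwarding virtual server described above can be created with a tmsh one-liner roughly like the following. This is a sketch rather than an exact translation of the v9 syntax, and details can vary slightly across versions:

# Wildcard IP-forwarding virtual on all VLANs (defaults), all protocols and ports
tmsh create ltm virtual forward_vs destination 0.0.0.0:any mask any ip-forward ip-protocol any profiles add { fastL4 }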
If you have multiple gateways, you can load balance requests between the routers. To do so, first create a gateway pool containing the routers as members. Then configure the virtual server as above, but select Type "Performance (Layer 4)" instead of "Forwarding (IP)", and apply the gateway pool as its resource.

Related information

SOL7229: Methods of gaining administrative access to nodes through the BIG-IP system – If you only need to forward administrative traffic to your servers, and no other forwarding is required, there are several additional options for that detailed in this solution.

SOL473: Advantages and disadvantages of using IP forwarding – This is an old solution that summarizes the pros and cons of BIG-IP 4.x IP forwarding. I only suggest reading it now to highlight the fact that LTM's approach retains the advantages and overcomes the disadvantages mentioned therein.
The Full-Proxy Data Center Architecture

Why a full-proxy architecture is important to both infrastructure and data centers.

In the early days of load balancing and application delivery there was a lot of confusion about proxy-based architectures and in particular the definition of a full-proxy architecture. Understanding what a full-proxy is will be increasingly important as we continue to re-architect the data center to support a more mobile, virtualized infrastructure in the quest to realize IT as a Service.

THE FULL-PROXY PLATFORM

The reason there is a distinction made between "proxy" and "full-proxy" stems from the handling of connections as they flow through the device. All proxies sit between two entities – in the Internet age almost always "client" and "server" – and mediate connections. While all full-proxies are proxies, the converse is not true. Not all proxies are full-proxies, and it is this distinction that needs to be made when making decisions that will impact the data center architecture.

A full-proxy maintains two separate session tables – one on the client side, one on the server side. There is effectively an "air gap" isolation layer between the two internal to the proxy, one that enables focused profiles to be applied specifically to address issues peculiar to each "side" of the proxy. Clients often experience higher latency because of lower-bandwidth connections, while the servers are generally low latency because they're connected via a high-speed LAN. The optimizations and acceleration techniques used on the client side are far different than those on the LAN side, because the issues that give rise to performance and availability challenges are vastly different.

A full-proxy, with separate connection handling on either side of the "air gap", can address these challenges. A proxy, which may be a full-proxy but more often than not simply uses a buffer-and-stitch methodology to perform connection management, cannot optimally do so. A typical proxy buffers a connection, often through the TCP handshake process and potentially into the first few packets of application data, but then "stitches" a connection to a given server on the back end using either layer 4 or layer 7 data, perhaps both. The connection is a single flow from end to end and must choose which characteristics of the connection to focus on – client or server – because it cannot simultaneously optimize for both.

The second advantage of a full-proxy is its ability to perform more tasks on the data being exchanged over the connection as it is flowing through the component. Because specific action must be taken to "match up" the connection as it's flowing through the full-proxy, the component can inspect, manipulate, and otherwise modify the data before sending it on its way on the server side. This is what enables termination of SSL, enforcement of security policies, and performance-related services to be applied on a per-client, per-application basis. This capability translates to broader usage in data center architecture by enabling the implementation of an application delivery tier in which operational risk can be addressed through the enforcement of various policies. In effect, we've created a full-proxy data center architecture in which the application delivery tier as a whole serves as the "full proxy" that mediates between the clients and the applications.
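As a small, concrete illustration of that "air gap": because each side of the proxy has its own session table, BIG-IP lets you attach different protocol profiles to the client side and the server side of the same virtual server. On a recent TMOS release, the tmsh form looks roughly like this (the virtual server address and pool name are placeholders):

# WAN-tuned TCP toward clients, LAN-tuned TCP toward servers (hypothetical names)
tmsh create ltm virtual app_vs destination 203.0.113.10:443 pool app_pool profiles add { tcp-wan-optimized { context clientside } tcp-lan-optimized { context serverside } }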
THE FULL-PROXY DATA CENTER ARCHITECTURE

A full-proxy data center architecture installs a digital "air gap" between the client and applications by serving as the aggregation (and conversely disaggregation) point for services. Because all communication is funneled through virtualized applications and services at the application delivery tier, it serves as a strategic point of control at which delivery policies addressing operational risk (performance, availability, security) can be enforced.

A full-proxy data center architecture further has the advantage of isolating end users from the volatility inherent in highly virtualized and dynamic environments such as cloud computing. It enables solutions such as those used to overcome limitations with virtualization technology, such as those encountered with pod-architectural constraints in VMware View deployments. Traditional access management technologies, for example, are tightly coupled to host names and IP addresses. In a highly virtualized or cloud computing environment, this constraint may spell disaster for performance or the ability to function, or both. By implementing access management in the application delivery tier – on a full-proxy device – volatility is managed through virtualization of the resources, allowing the application delivery controller to worry about details such as IP address and VLAN segments, freeing the access management solution to concern itself with determining whether this user on this device from that location is allowed to access a given resource.

Basically, we're taking the concept of a full-proxy and expanding it outward to the architecture. Inserting an "application delivery tier" allows for an agile, flexible architecture more supportive of the rapid changes today's IT organizations must deal with. Such a tier also provides an effective means to combat modern attacks. Because of its ability to isolate applications, services, and even infrastructure resources, an application delivery tier improves an organization's capability to withstand the onslaught of a concerted DDoS attack. The magnitude of difference between the connection capacity of an application delivery controller and most infrastructure (and all servers) gives the entire architecture a higher resiliency in the face of overwhelming connections. This ensures better availability and, when coupled with virtual infrastructure that can scale on demand when necessary, can also maintain the performance levels required by business concerns.

A full-proxy data center architecture is an invaluable asset to IT organizations in meeting the challenges of volatility both inside and outside the data center.

Related blogs & articles:
The Concise Guide to Proxies
At the Intersection of Cloud and Control…
Cloud Computing and the Truth About SLAs
IT Services: Creating Commodities out of Complexity
What is a Strategic Point of Control Anyway?
The Battle of Economy of Scale versus Control and Flexibility
F5 Friday: When Firewalls Fail…
F5 Friday: Platform versus Product
I started working on an administrator control panel for my previous Small URL Generator Tech Tip (part 1 and part 2) and realized that we probably didn’t want to make our Small URL statistics and controls viewable by everyone. That led me to decide how to secure this information. Why not the old, but venerable, HTTP basic access authentication? It’s simple and admittedly somewhat insecure, but it works well and is easy to deploy.

A little background on how this mechanism works:

1. The user places a GET request for a page without providing authentication.
2. The server responds with a “401 Authorization Required” status code and specifies the authentication type and realm via the WWW-Authenticate header. In our case we will be using basic access authentication, and we’ve arbitrarily named our realm “Secured Area”.
3. The user provides a username and password. The client joins the username and password with a colon, applies base-64 encoding to the concatenated string, and presents the result to the server via the Authorization header.
4. If the credentials are verified by the server, the client is granted access to the “Secured Area”.

Authentication Required for “Secured Area”

This transaction normally takes place between the client and application server, but we can emulate this functionality using an iRule. The mechanism is rather simple and easy to implement. We look for the client’s Authorization header and, if present, verify the credentials. If the credentials are valid, we grant access to the virtual server’s content; if not, we display the authentication box and repeat the process. On the BIG-IP side, we are storing the username and MD5-digested password in a data group (which we aptly named authorized_users). While this is not as secure as a salted hash, it does provide some security beyond storing the credentials in plain text on our BIG-IP. Once we took these elements into consideration, this is the iRule we developed:

when HTTP_REQUEST {
   binary scan [md5 [HTTP::password]] H* password

   if { [class lookup [HTTP::username] $::authorized_users] equals $password } {
      log local0. "User [HTTP::username] has been authorized to access virtual server [virtual name]"

      # Insert iRule-based application code here if necessary
   } else {
      if { [string length [HTTP::password]] != 0 } {
         log local0. "User [HTTP::username] has been denied access to virtual server [virtual name]"
      }

      HTTP::respond 401 WWW-Authenticate "Basic realm=\"Secured Area\""
   }
}

There are a couple different ways this iRule could be implemented. It can be applied as-is directly to any HTTP virtual and begin protecting the virtual’s contents. It can also be used to secure an iRule-based application, in which case the application code would need to be encapsulated where the comment is located in the above code. I will be publishing a more functional example of this in the near future, but you can start using it now if you have such a necessity.
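As an aside, the HTTP::username and HTTP::password commands handle the decoding of the Authorization header for you. If you’re curious about the mechanics described in the steps above, they reduce to something like this hypothetical sketch (illustrative only – you wouldn’t normally reimplement this yourself):

when HTTP_REQUEST {
   # e.g. "Basic dXNlcjpwYXNzd29yZA==" for user:password
   set auth_header [HTTP::header "Authorization"]
   if { $auth_header starts_with "Basic " } {
      # Strip the "Basic " scheme prefix and decode the base-64 payload
      set decoded [b64decode [substr $auth_header 6]]
      # The decoded payload is "username:password"
      set user [getfield $decoded ":" 1]
      set pass [getfield $decoded ":" 2]
      log local0. "Basic auth attempt for user $user"
   }
}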
Earlier we discussed the inherent insecurity of basic access authentication due to the username and password being transmitted in plain text. To combat this, you’ll probably want to conduct this transaction over an SSL-secured connection. Additionally, you’ll want to generate the passwords as MD5 hashes before adding them to your data group. Here are a few ways to accomplish this:

Bash

% echo -n "password" | md5sum
=> 5f4dcc3b5aa765d61d8327deb882cf99  -

Perl

% perl -e 'use Digest::MD5 qw(md5_hex); print md5_hex("password");'
=> 5f4dcc3b5aa765d61d8327deb882cf99

Ruby (using irb)

irb(main):001:0> require "md5"
=> true
irb(main):002:0> MD5::md5("password")
=> #<MD5: 5f4dcc3b5aa765d61d8327deb882cf99>
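And since the BIG-IP side of this solution lives in Tcl, you can generate the same digest from tclsh as well. This sketch assumes tcllib’s md5 package is available on your workstation:

# assumes tcllib is installed; md5::md5 -hex returns uppercase hex,
# so lowercase it to match what the iRule's "binary scan ... H*" produces
package require md5
puts [string tolower [md5::md5 -hex "password"]]
# => 5f4dcc3b5aa765d61d8327deb882cf99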
While HTTP basic access authentication may not be the best authentication method for every case, it definitely has its advantages. It is easy to deploy (and even easier via an iRule), provides basic authentication without having to configure or depend on an external authentication service, and is supported by any browser developed in the last 15 years. Through the use of different data groups you can easily separate access to different virtuals or provide a simple SSO (Single Sign-On) solution for a number of virtual servers. We hope you find this tidbit of code to be useful in your environment. Stay tuned for more extensive examples and usages of this tech tip in the near future!

Scheduling BIG-IP Configuration Backups via the GUI with an iApp

Beginning with BIG-IP version 11, the idea of templates has not only changed in amazing and powerful ways, it has been extended to be far more than just templates. The replacement for templates is called iApp™. But to call the iApp just a template would be woefully inaccurate and narrow. It does templates well, and takes the concept further by allowing you to re-enter a templated application and make changes. Previously, deploying an application via a template was sort of like the Ron Popeil rotisserie: “Set it, and forget it!” Once it was executed, the template process was over; it was up to you to track and potentially clean up all those objects. Now, the application service you create based on an iApp template effectively “owns” all the objects it created, so any change to the deployment adds/changes/deletes objects as necessary. The other exciting change from the template perspective is the idea of strictness. Once an application service is configured, any object created that is owned by that service cannot be changed outside of the service itself. This means that if you want to add a pool member, it must be done within the application service, not within the pool. You can turn this off, but what a powerful protection of your services!

Update for v11.2 - https://devcentral.f5.com/s/Tutorials/TechTips/tabid/63/articleType/ArticleView/articleId/1090565/Archiving-BIG-IP-Configurations-with-an-iApp-in-v112.aspx

The Problem

I received a request from one of our MVPs that he’d really like to be able to allow his users to schedule configuration backups without dropping to the command line. Knowing that the iApp feature was releasing soon with version 11, I started to see how I might be able to coax a command line configuration from the GUI. In training, I was told that “anything you can do in tmsh, you can do with an iApp.” This is excellent, and the basis for why I think they are going to be incredibly popular for not only controlling and managing applications, but also for extending CLI functions to the GUI. Anyway, in order to schedule a configuration backup, I need:

A backup script
A cron job to call said script

That’s really all there is to it.

The Solution

Thankfully, the background work is already done courtesy of a config backup codeshare entry by community user Colin Stubbs in the Advanced Design & Config Wiki. I did have to update the following bigpipe lines from the script:

bigpipe export oneline "${SCF_FILE}"
   becomes: tmsh save /sys config one-line file "${SCF_FILE}"
bigpipe export "${SCF_FILE}"
   becomes: tmsh save /sys config file "${SCF_FILE}"
bigpipe config save "${UCS_FILE}" passphrase "${UCS_PASSPHRASE}"
   becomes: tmsh save /sys ucs "${UCS_FILE}" passphrase "${UCS_PASSPHRASE}"
bigpipe config save "${UCS_FILE}"
   becomes: tmsh save /sys ucs "${UCS_FILE}"

Also, per the script comments from the codeshare entry, I created a /var/local/bin directory to place the script and a /var/local/backups directory for the script to dump the backup files in. These are optional and can be changed as necessary in your deployment; you’ll just need to update the script to reflect your file system preferences.
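For reference, the heart of that script boils down to just a few lines. The sketch below is not the codeshare script itself – the file-name format and archive type are assumptions based on the directories above:

#!/bin/tclsh
# Minimal sketch: write a timestamped UCS archive to the backup directory
set backup_dir "/var/local/backups"
set ts [clock format [clock seconds] -format "%Y%m%d%H%M%S"]
set ucs_file "${backup_dir}/f5backup-[info hostname]-${ts}.ucs"

# Hand off to tmsh to build the archive
exec tmsh save /sys ucs $ucs_file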
Now that I have everything I need to support a backup, I can move on to the iApp template configuration.

iApp Components

A template consists of three parts: implementation, presentation, and help. You can create an empty template, or just start with presentation or help if you like. The implementation is tmsh script language, based on the Tcl language so loved by all of us iRulers. Please reference the tmsh wiki for the available tmsh extensions to the Tcl language. The presentation is written with the Application Presentation Language, or APL, which is new and custom-built for templates. It is defined on the APL page in the iApp wiki. The help is written in HTML, and is used to guide users in the use of the template. I’ll focus on the presentation first, and then the implementation. I’ll forego the help section in this article.

Presentation

The reason I’m starting with the presentation section of the template is that the implementation section’s Tcl variables reflect the presentation methods’ naming conventions. I want to accomplish a few things in the template presentation:

Ask users for the frequency of backups (daily, weekly, monthly)
If weekly, ask for the day of the week
If monthly, ask for the day of the month and provide a warning about days 29-31
For all frequencies, ask for the hour and minute the backup should occur

The APL code for this looks like this:

section time_select {
    choice day_select display "large" { "Daily", "Weekly", "Monthly" }
    optional ( day_select == "Weekly" ) {
        choice dow_select display "medium" { "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday" }
    }
    optional ( day_select == "Monthly" ) {
        message dom_warning "The day of the month should be the 1st-28th. Selecting the 29th-31st will result in missed backups on some months."
        choice dom_select display "medium" tcl {
            for { set x 1 } { $x < 32 } { incr x } { append dom "$x\n" }
            return $dom
        }
    }
    choice hr_select display "medium" tcl {
        for { set x 0 } { $x < 24 } { incr x } { append hrs "$x\n" }
        return $hrs
    }
    choice min_select display "medium" tcl {
        for { set x 0 } { $x < 60 } { incr x } { append mins "$x\n" }
        return $mins
    }
}

text {
    time_select "Backup Schedule"
    time_select.day_select "Choose the frequency the backup should occur:"
    time_select.dow_select "Choose the day of the week the backup should occur:"
    time_select.dom_warning "WARNING: "
    time_select.dom_select "Choose the day of the month the backup should occur:"
    time_select.hr_select "Choose the hour the backup should occur:"
    time_select.min_select "Choose the minute the backup should occur:"
}

A few things to point out. First, the sections (which can’t be nested) provide a way to set apart functional differences in your form. I only needed one here, but it’s very useful if I were to build on and add options for selecting a UCS or SCF format, or specifying a mail address for the backups to be mailed to. Second, order matters. The objects will be displayed in the template as you define them. Third, the optional command allows me to hide questions that wouldn’t make sense given previous answers. If you dig into some of the canned templates shipping with v11, you’ll also see another use case for the optional command. Fourth, you can use Tcl commands to populate fields for you. This can be generated data like I did above, or you can loop through configuration objects to present in the template as well (see the sketch at the end of this section). Finally, the text section is where you define the language you want to appear with each of your objects. The nomenclature here is section.variable.

To give you an idea of what this looks like, here is a screenshot of a monthly backup configuration:

Once my template (f5.archiving) is saved, I can configure it in the Application Services section by selecting the template. At this point, I have a functioning presentation, but with no implementation, it’s effectively useless.
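Regarding that fourth point: as a purely hypothetical illustration (it is not part of this template), a presentation choice could be populated from existing configuration objects – here, pool names pulled via tmsh::get_config – assuming those tmsh commands are available in your version’s presentation context:

choice pool_select display "large" tcl {
    # Build a newline-delimited list of existing pool names
    set pools ""
    foreach p [tmsh::get_config /ltm pool] {
        append pools "[tmsh::get_name $p]\n"
    }
    return $pools
}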
Implementation

Now that the presentation is complete, I can move on to an implementation. I need to do a couple things in the implementation:

Grab the data entered into the application service
Convert the day of week information from long name to the appropriate 0-6 (or 1-7) number for cron
Use that data to build a cron file (statically assigned at this point to /etc/cron.d/f5backups)

Here is the implementation section:

array set dow_map {
    Sunday    0
    Monday    1
    Tuesday   2
    Wednesday 3
    Thursday  4
    Friday    5
    Saturday  6
}

set hr $::time_select__hr_select
set min $::time_select__min_select

set infile [open "/etc/cron.d/f5backups" "w" "0755"]
puts $infile "SHELL=\/bin\/bash"
puts $infile "PATH=\/sbin:\/bin:\/usr\/sbin:\/usr\/bin"
puts $infile "#MAILTO=user@somewhere"
puts $infile "HOME=\/var\/tmp\/"

if { $::time_select__day_select == "Daily" } {
    puts $infile "$min $hr * * * root \/bin\/bash \/var\/local\/bin\/f5backup.sh 1>\/var\/tmp\/f5backup.log 2>\&1"
} elseif { $::time_select__day_select == "Weekly" } {
    puts $infile "$min $hr * * $dow_map($::time_select__dow_select) root \/bin\/bash \/var\/local\/bin\/f5backup.sh 1>\/var\/tmp\/f5backup.log 2>\&1"
} elseif { $::time_select__day_select == "Monthly" } {
    puts $infile "$min $hr $::time_select__dom_select * * root \/bin\/bash \/var\/local\/bin\/f5backup.sh 1>\/var\/tmp\/f5backup.log 2>\&1"
}

close $infile

A few notes:

The dow_map array converts the selected day (e.g., Saturday) to a number for cron (6).
The variables in the implementation section reference the data supplied from the presentation like so: $::<section>__<presentation variable name>. (Note the double underscore between them. As such, DO NOT use double underscores in your presentation variables.)
tmsh special characters need to be escaped if you’re using them for strings.

A successful configuration of the application service results in this file configuration for /etc/cron.d/f5backups:

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
#MAILTO=user@somewhere
HOME=/var/tmp/
54 15 * * * root /bin/bash /var/local/bin/f5backup.sh 1>/var/tmp/f5backup.log 2>&1

So this backup is scheduled to run daily at 15:54. This is confirmed with this directory listing on my BIG-IP:

[root@golgotha:Active] bin # ls -las /var/local/backups
total 2168
   8 drwx------ 2 root root    4096 Aug 16 15:54 .
   8 drwxr-xr-x 9 root root    4096 Aug  3 14:44 ..
1076 -rw-r--r-- 1 root root 1091639 Aug 15 15:54 f5backup-golgotha.test.local-20110815155401.tar.bz2
1076 -rw-r--r-- 1 root root 1092259 Aug 16 15:54 f5backup-golgotha.test.local-20110816155401.tar.bz2

Conclusion

This is just scratching the surface of what can be done with the new iApp feature in v11. I didn’t even cover the ability to use presentation and implementation libraries, but that will be covered in due time. If you’re impatient, there are already several examples (including this one here) in the codeshare.

Related Articles
F5 DevCentral > Community > Group Details - iApp
iApp Wiki Home - DevCentral Wiki
F5 Agility 2011 - James Hendergart on iApp
iApp Codeshare - DevCentral Wiki
iApp Lab 5 - Priority Group Activation - DevCentral Wiki
iApp Template Development Tips and Techniques - DevCentral Wiki
Lori MacVittie - iApp

Selective Client Cert Authentication
SSL encryption on the web is not a new concept to the general population of the internet. Those of us that frequent many websites per week (day, hour, minute, etc.) are quite used to making use of SSL encryption for security purposes. It’s an accepted standard, and we’re all fairly used to dealing with it in varied capacities. Whether it’s that nifty yellow URL bar in Firefox or the security warning saying that portions of the site you’re going to are unencrypted, we’ve likely seen it before and are comfortable with it in day-to-day operation.

What if, however, I wanted to get more out of my certificates? One of the more common behind-the-scenes things that gets done with certificates is authentication. Via client-cert authentication users can have a “passwordless” user experience, automatic authentication into multiple apps with different access levels, and a smooth browsing experience with the applications in question. Combine these nice-to-have features with improved security, as it’s much harder to spoof a client-cert than it is a password, and it’s not surprising we’re seeing a fair amount of companies putting this type of authentication into place.

That’s all well and good, but what if you don’t want your entire site to be authenticated this way? What if you only want users trying to access certain portions of the site to be required to present a valid client-cert? What’s more, what if you need to pass along some of the information from the certificate to the back-end application? Extracting things like the issuer, subject and version can be necessary in some of these situations. That’s a fair amount of application-layer overhead to put on your application servers – inspecting every client request, determining the intended location, negotiating client-cert authentication if necessary, passing that info on, etc. Wouldn’t it be nice if you could not only offload all of this overhead, but the management overhead of the setup as well? As is often the case, with iRules, you can.

With the below example iRule, not only can you selectively require a certificate from inbound users depending on – in this case – the requested URI, but you can also extract valuable cert information from the client and insert it into HTTP headers to be passed back to the application servers for whatever processing needs they might have. This allows you to fine-tune the user experience of your application or site for those users who need access via client-cert authentication without affecting those that don’t. You can even custom-define the actions for the iRule to take in case a user requests a URI that requires authentication but doesn’t have the appropriate cert. There is a little configuration that needs to be done, like setting up a Client SSL profile to decrypt the SSL traffic coming in, but that should be simple enough. The iRule itself is pretty straightforward. It uses the matchclass command to compare the URI to a list of known URIs that require authentication (the class is not shown in the example; a sketch of it appears after the iRule discussion below). If it finds a match, it uses the SSL commands to check for and require a certificate. Once this is found, it uses the X509 commands to poll cert information and include it in some custom HTTP headers that the back-end servers can look for.
when CLIENTSSL_CLIENTCERT {
   HTTP::release
   if { [SSL::cert count] < 1 } {
      reject
   }
}

when HTTP_REQUEST {
   if { [matchclass [HTTP::uri] starts_with $::requires_client_cert] } {
      if { [SSL::cert count] <= 0 } {
         HTTP::collect
         SSL::authenticate always
         SSL::authenticate depth 9
         SSL::cert mode require
         SSL::renegotiate
      }
   }
}

when HTTP_REQUEST_SEND {
   clientside {
      if { [SSL::cert count] > 0 } {
         HTTP::header insert "X-SSL-Session-ID" [SSL::sessionid]
         HTTP::header insert "X-SSL-Client-Cert-Status" [X509::verify_cert_error_string [SSL::verify_result]]
         HTTP::header insert "X-SSL-Client-Cert-Subject" [X509::subject [SSL::cert 0]]
         HTTP::header insert "X-SSL-Client-Cert-Issuer" [X509::issuer [SSL::cert 0]]
      }
   }
}

As you can see, there is a fair amount of room for further customization, as was partly mentioned above – things like dealing with custom error pages or routing for requests that should require authentication but don’t provide a cert, allowing different levels of access based on the cert information collected, etc. All in all, this iRule represents a relatively simple solution to a more complex problem and does so in a manner that’s easy to implement and maintain. That’s the power of iRules, in a nutshell.
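For completeness, the requires_client_cert class referenced by matchclass above is simply a string data group of URI prefixes. A hypothetical definition – the URIs here are examples only, not part of the original configuration – might look like this:

class requires_client_cert {
   "/admin"
   "/reports"
}

With that in place, a request for /admin/stats would trigger the certificate renegotiation, while requests elsewhere on the site pass through untouched.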