Requiring an SSL Certificate for Parts of an Application
When building enterprise web-based applications, security must be taken seriously. iRules provide powerful capabilities for influencing security decisions when processing traffic for your web services and applications. This is a rule for requiring a client certificate for certain parts of an application. The example below requires a certificate when the URL path is for a certain directory. Alternatively, a rule could be written to check the host name or file extension if that is more appropriate for your requirements.

Special notes: this rule requires version 9.0.2 to operate correctly. Log statements are commented out; for testing, they can be uncommented.

When the client connects, we set up variables to record two things - whether a certificate has been received and whether a certificate needs to be received. These variables start out with a value of zero, which means "false".

when CLIENT_ACCEPTED {
    set needcert 0
    set gotcert 0
}

When a client does an SSL handshake, this rule event is triggered. This is the time to validate that a certificate has been received. If a certificate has not been received, but we were expecting one ($needcert == 1), then the connection is rejected. If the certificate has been received, we note that for future reference (set gotcert 1) and we release the current request (HTTP::release) if we were waiting for a certificate before releasing the request.

when CLIENTSSL_HANDSHAKE {
    # log LOCAL0.warn "cert count=[SSL::cert count] result=[SSL::verify_result]"
    if { [SSL::cert count] == 0 or [SSL::verify_result] != 0 } {
        # log LOCAL0.warn "Bad cert!"
        if { $needcert == 1 } {
            reject
        }
    } else {
        # log LOCAL0.warn "Good cert! ($needcert)"
        set gotcert 1
        if { $needcert == 1 } {
            HTTP::release
        }
    }
}

Here we process an HTTP request. If the request is for a directory that has been designated for extra security, then several things happen: we freeze the HTTP request until the client certificate is received, we tell SSL to require a certificate, we tell SSL to renegotiate now, and then we set a flag that indicates we need a certificate.

when HTTP_REQUEST {
    if { $gotcert == 0 and [HTTP::uri] starts_with "/needcert" } {
        # log LOCAL0.warn "Requiring certificate..."
        HTTP::collect
        SSL::cert mode require
        SSL::renegotiate
        set needcert 1
    } else {
        # log LOCAL0.warn "No cert needed."
    }
}

Questions about this iRule? Post them in the Technical Forum.
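Since the write-up mentions that a rule could key off the host name or file extension instead, here is a minimal sketch of that variant of the HTTP_REQUEST event. The ".cgi" extension is purely illustrative - substitute whatever is sensitive in your environment:

when HTTP_REQUEST {
    # Hypothetical variant: require a client certificate for a sensitive
    # file extension instead of a directory (".cgi" is just an example)
    if { $gotcert == 0 and [HTTP::path] ends_with ".cgi" } {
        HTTP::collect
        SSL::cert mode require
        SSL::renegotiate
        set needcert 1
    }
}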
Offload Authentication with iRules

As the applications being driven by webservers become more and more complex, application developers are always looking for ways to increase efficiency or do away with unneeded processing time. One of the ways applications can do that is by making use of an intelligent network infrastructure. Assuming the network that is delivering the applications is an intelligent, application-aware one, there are many things developers can do to offload some of the work their code would normally have the web server processing to the network level. One such thing that can be offloaded in many cases is authentication. By leaving the heavy lifting of ensuring only authorized users are accessing the application(s) in question to the network, the web server is free to use its processing power to deliver the application it's hosting faster and more reliably. This is especially true in highly taxed environments.

The example below shows one way in which this can be done. This example uses an HTTP cookie to store authentication information for each user, which is a common practice for many applications. It gets the data to be stored - i.e., whether or not a user is properly authenticated - by enlisting the authentication server already in place on this hypothetical network. In this specific example, that authentication system is a RADIUS system, but the iRule works equally well with LDAP, TACACS, etc.

If the authentication attempt is successful, a cookie will be sent to the client with the appropriate data to be stored. The next time that client tries to access the application, the AUTH cookie is present and valid, so the client will be passed immediately without being re-checked for authentication. If it is not successful, then you can decide what experience that user should have by altering the code in the "AUTH_FAILURE" section, or leave the standard 401 error message that stands there now. In this example, the cookie name, password, and domain should be properly modified for your environment.

This code comes from the DevCentral iRules CodeShare, where you can find many useful iRules examples, not to mention post your own to share with the community. Just another great example of how with iRules, you can... 😉

when CLIENT_ACCEPTED {
    set authinsck 0
    set forceauth 1
    set ckname BIGXAUTH
    set ckpass 1xxx5678
    set ckvalue [IP::client_addr]
    set ckdomain .y.z
    set asid [AUTH::start pam default_radius]
}
when HTTP_REQUEST {
    if {[HTTP::cookie exists $ckname]} {
        HTTP::cookie decrypt $ckname $ckpass 128
        if {[HTTP::cookie value $ckname] eq $ckvalue} {
            set forceauth 0
        }
        HTTP::cookie remove $ckname
    }
    if {$forceauth eq 1} {
        AUTH::username_credential $asid [HTTP::username]
        AUTH::password_credential $asid [HTTP::password]
        AUTH::authenticate $asid
        HTTP::collect
    }
}
when HTTP_RESPONSE {
    if {$authinsck eq 1} {
        HTTP::cookie insert name $ckname value $ckvalue path / domain $ckdomain
        HTTP::cookie secure $ckname enable
        HTTP::cookie encrypt $ckname $ckpass 128
    }
}
when AUTH_SUCCESS {
    if {$asid eq [AUTH::last_event_session_id]} {
        set authinsck 1
        HTTP::release
    }
}
when AUTH_FAILURE {
    if {$asid eq [AUTH::last_event_session_id]} {
        HTTP::respond 401 "WWW-Authenticate" "Basic realm=\"\""
    }
}
when AUTH_WANTCREDENTIAL {
    if {$asid eq [AUTH::last_event_session_id]} {
        HTTP::respond 401 "WWW-Authenticate" "Basic realm=\"\""
    }
}
when AUTH_ERROR {
    if {$asid eq [AUTH::last_event_session_id]} {
        HTTP::respond 401
    }
}
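As noted above, the rule isn't RADIUS-specific. If your environment authenticates against LDAP instead, the only line that has to change is the AUTH::start call in CLIENT_ACCEPTED. A hedged sketch - the profile name "default_ldap" is an assumption, so use whatever PAM auth profile actually exists on your system:

when CLIENT_ACCEPTED {
    # Hypothetical LDAP variant: point AUTH::start at an LDAP PAM profile
    # ("default_ldap" is an assumed name); everything else stays the same.
    set asid [AUTH::start pam default_ldap]
}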
These Are Not The Scrapes You're Looking For - Session Anomalies

In my first article in this series, I discussed web scraping -- what it is, why people do it, and why it could be harmful. My second article outlined the details of bot detection and how the ASM blocks against these pesky little creatures. This last article in the web scraping series will focus on the final part of the ASM defense against web scraping: session opening anomalies and session transaction anomalies. These two detection modes are new in v11.3, so if you're using v11.2 or earlier, then you should upgrade and take advantage of these great new features!

ASM Configuration

In case you missed it in the bot detection article, here's a quick screenshot that shows the location and settings of the Session Opening and Session Transactions Anomaly in the ASM. You'll find all the fun when you navigate to Security > Application Security > Anomaly Detection > Web Scraping. There are three different settings in the ASM for Session Anomaly: Off, Alarm, and Alarm and Block. (Note: these settings are configured independently...they don't have to be set at the same value.) Obviously, if Session Anomaly is set to "Off" then the ASM does not check for anomalies at all. The "Alarm" setting will detect anomalies and record attack data, but it will allow the client to continue accessing the website. The "Alarm and Block" setting will detect anomalies, record the attack data, and block the suspicious requests.

Session Opening Anomaly

The first detection and prevention mode we'll discuss is Session Opening Anomaly. But before we get too deep into this, let's review what a session is. From a simple perspective, a session begins when a client visits a website, and it ends when the client leaves the site (or the client exceeds the session timeout value). Most clients will visit a website, surf around some links on the site, find the information they need, and then leave. When clients don't follow a typical browsing pattern, it makes you wonder what they are up to and if they are one of the bad guys trying to scrape your site. That's where Session Opening Anomaly defense comes in!

Session Opening Anomaly defense checks for lots of abnormal activities like clients that don't accept cookies or process JavaScript, clients that don't scrape by surfing internal links in the application, and clients that create a one-time session for each resource they consume. These one-time sessions lead scrapers to open a large number of new sessions in order to complete their job quickly.

What's Considered A New Session?

Since we are discussing session anomalies, I figured we should spend a few sentences describing how the ASM differentiates between a new or ongoing session for each client request. Each new client is assigned a "TS cookie" and this cookie is used by the ASM to identify future requests from the client with a known, ongoing session. If the ASM receives a client request and the request does not contain a TS cookie, then the ASM knows the request is for a new session. This will prove very important when calculating the values needed to determine whether or not a client is scraping your site.

Detection

There are two different methods used by the ASM to detect these anomalies. The first method compares a calculated value to a predetermined ceiling value for newly opened sessions. The second method considers the rate of increase of newly opened sessions. We'll dig into all that in just a minute. But first, let's look at the criteria used for detecting these anomalies.
As you can see from the screenshot above, there are three detection criteria the ASM uses:

Sessions opened per second increased by: This specifies that the ASM considers client traffic to be an attack if the number of sessions opened per second increases by a given percentage. The default setting is 500 percent.

Sessions opened per second reached: This specifies that the ASM considers client traffic to be an attack if the number of sessions opened per second is greater than or equal to this number. The default value is 400 sessions opened per second.

Minimum sessions opened per second threshold for detection: This specifies that the ASM considers traffic to be an attack if the number of sessions opened per second is greater than or equal to the number specified. In addition, at least one of the "Sessions opened per second increased by" or "Sessions opened per second reached" numbers must also be reached. If the number of sessions opened per second is lower than the specified number, the ASM does not consider this traffic to be an attack even if one of the "Sessions opened per second increased by" or "Sessions opened per second reached" numbers was reached. The default value for this setting is 200 sessions opened per second.

In addition, the ASM maintains two variables for each client IP address: a one-minute running average of new session opening rate, and a one-hour running average of new session opening rate. Both of these variables are recalculated every second.

Now that we have all the basic building blocks, let's look at how the ASM determines if a client is scraping your site.

First Method: Predefined Ceiling Value

This method uses the user-defined "minimum sessions opened per second threshold for detection" value and compares it to the one-minute running average. If the one-minute average is less than this number, then nothing else happens because the minimum threshold has not been met. But, if the one-minute average is higher than this number, the ASM goes on to compare the one-minute average to the user-defined "sessions opened per second reached" value. If the one-minute average is less than this value, nothing happens. But, if the one-minute average is higher than this value, the ASM will declare the client a web scraper. The following flowchart provides a pictorial representation of this process.

Second Method: Rate of Increase

The second detection method uses several variables to compare the rate of increase of newly opened sessions against user-defined variables. Like the first method, this method first checks to make sure the minimum sessions opened per second threshold is met before doing anything else. If the minimum threshold has been met, the ASM will perform a few more calculations to determine if the client is a web scraper or not. The "sessions opened per second increased by" value (percentage) is multiplied by the one-hour running average, and this value is compared to the one-minute running average. If the one-minute average is greater, then the ASM declares the client a web scraper. If the one-minute average is lower, then nothing happens. The following matrix shows a few examples of this detection method. Keep in mind that the one-minute and one-hour averages are recalculated every second, so these values will be very dynamic.
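To make the two detection methods concrete, here's a small Tcl sketch of the decision logic described above. This is purely illustrative - it is not ASM's internal implementation - and the last three arguments correspond to the three user-defined settings:

# Illustrative sketch only - not ASM internals. Returns 1 if the client
# would be declared a web scraper under either detection method.
proc session_opening_attack {one_min_avg one_hour_avg min_threshold reached increased_pct} {
    # Neither method fires unless the minimum threshold is met
    if { $one_min_avg < $min_threshold } { return 0 }
    # Method 1: predefined ceiling ("sessions opened per second reached")
    if { $one_min_avg >= $reached } { return 1 }
    # Method 2: rate of increase vs. the one-hour running average
    if { $one_min_avg > $one_hour_avg * ($increased_pct / 100.0) } { return 1 }
    return 0
}

# With the defaults (min 200, reached 400, increased by 500%), a one-minute
# average of 250 against a one-hour average of 40 is flagged: 250 > 40 * 5.0
# session_opening_attack 250 40 200 400 500  => returns 1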
Prevention

The ASM provides several policies to prevent session opening anomalies. It begins with the first method that you enable in this list. If the system finds this method not effective enough to stop the attack, it uses the next method that you enable in this list. The following screenshots show the different options available for prevention. The "Drop IP Addresses with bad reputation" option is tied to Rate Limiting, so it will not appear as an option unless you enable Rate Limiting. Note that IP Address Intelligence must be licensed and enabled. This feature is licensed separately from the other ASM web scraping options.

Here's a quick breakdown of what each of these prevention policies does for you:

Client Side Integrity Defense: The system determines whether the client is a legal browser or an illegal script by sending a JavaScript challenge to each new session request from the detected IP address, and waiting for a response. The JavaScript challenge will typically involve some sort of computational challenge. Legal browsers will respond with a TS cookie while illegal scripts will not. The default for this feature is disabled.

Rate Limiting: The goal of Rate Limiting is to keep the volume of new sessions at a "non-attack" level. The system will drop sessions from suspicious IP addresses after the system determines that the client is an illegal script. The default for this feature is also disabled.

Drop IP Addresses with bad reputation: The system drops requests from IP addresses that have a bad reputation according to the system's IP Address Intelligence database (shown above). The ASM will drop all requests from any "bad" IP addresses even if they respond with a TS cookie. IP addresses that do not have a bad reputation also undergo rate limiting. The default for this option is disabled. Keep in mind that this option is available only after Rate Limiting is enabled. In addition, this option is only enforced if at least one of the IP Address Intelligence Categories is set to Alarm mode.

Prevention Duration

Now that we have detected session opening anomalies and mitigated them using our prevention options, we must figure out how long to apply the prevention measures. This is where the Prevention Duration comes in. This setting specifies the length of time that the system will prevent an attack. The system prevents attacks by rejecting requests from the attacking IP address. There are two settings for Prevention Duration:

Unlimited: This specifies that after the system detects and stops an attack, it performs attack prevention until it detects the end of the attack. This is the default setting.

Maximum <number of> seconds: This specifies that after the system detects and stops an attack, it performs attack prevention for the amount of time indicated unless the system detects the end of the attack earlier.

So, to finish up the Session Opening Anomaly part of this article, I wanted to share a quick scenario. I was recently reading several articles from some of the web scrapers around the block, and I found one guy's solution for working around web scraping defense. Here's what he said:

"Since the service conducted rate-limiting based on IP address, my solution was to put the code that hit their service into some client-side JavaScript, and then send the results back to my server from each of the clients. This way, the requests would appear to come from thousands of different places, since each client would presumably have their own unique IP address, and none of them would individually be going over the rate limit."

This guy is really smart! And, this would work great against a web scraping defense that only offered a Rate Limiting feature.
Here's the pop quiz question: If a user were to deploy this same tactic against the ASM, what would you do to catch this guy? I'm thinking you would need to set your minimum threshold at an appropriate level (this will ensure the ASM kicks into gear when all these sessions are opened) and then the "sessions opened per second" or the "sessions opened per second increased by" setting should take care of the rest for you. As always, it's important to learn what each setting does and then test it on your own environment for a period of time to ensure you have everything tuned correctly. And, don't forget to revisit your settings from time to time...you will probably need to change them as your network environment changes.

Session Transactions Anomaly

The second detection and prevention mode is Session Transactions Anomaly. This mode specifies how the ASM reacts when it detects a large number of transactions per session as well as a large increase of session transactions. Keep in mind that web scrapers are designed to extract content from your website as quickly and efficiently as possible. So, web scrapers normally perform many more transactions than a typical application client. Even if a web scraper found a way around all the other defenses we've discussed, the Session Transactions Anomaly defense should be able to catch it based on the sheer number of transactions it performs during a given session. The ASM detects this activity by counting the number of transactions per session and comparing that number to a total average of transactions from all sessions. The following screenshot shows the detection and prevention criteria for Session Transactions Anomaly.

Detection

How does the ASM detect all this bad behavior? Well, since it's trying to find clients that surf your site much more than other clients, it tracks the number of transactions per client session (note: the ASM will drop a session from the table if no transactions are performed for 15 minutes). It also tracks the average number of transactions for all current sessions (note: the ASM calculates the average transaction value every minute). It can use these two figures to compare a specific client session to a reasonable baseline and figure out if the client is performing too many transactions. The ASM can automatically figure out the number of transactions per client, but it needs some user-defined thresholds to conduct the appropriate comparisons. These thresholds are as follows:

Session transactions increased by: This specifies that the system considers traffic to be an attack if the number of transactions per session increased by the percentage listed. The default setting is 500 percent.

Session transactions reached: This specifies that the system considers traffic to be an attack if the number of transactions per session is equal to or greater than this number. The default value is 400 transactions.

Minimum session transactions threshold for detection: This specifies that the system considers traffic to be an attack if the number of transactions per session is equal to or greater than this number, and at least one of the "Session transactions increased by" or "Session transactions reached" numbers was reached. If the number of transactions per session is lower than this number, the system does not consider this traffic to be an attack even if one of the "Session transactions increased by" or "Session transactions reached" numbers was reached. The default value is 200 transactions.
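These three thresholds combine the same way the session opening criteria did, so here's a companion sketch in the same illustrative spirit (again, not ASM internals):

# Illustrative sketch only. Returns 1 if a session's transaction count
# would be flagged under either detection method.
proc session_transactions_attack {session_txns avg_txns min_threshold reached increased_pct} {
    # Neither method fires unless the minimum threshold is met
    if { $session_txns < $min_threshold } { return 0 }
    # Method 1: total session transactions vs. "session transactions reached"
    if { $session_txns >= $reached } { return 1 }
    # Method 2: session transactions vs. average * "increased by" percentage
    if { $session_txns > $avg_txns * ($increased_pct / 100.0) } { return 1 }
    return 0
}

# A session with 250 transactions while the average is 90 (as in the worked
# example below) is not flagged: 250 < 400 and 250 < 90 * 5.0 = 450
# session_transactions_attack 250 90 200 400 500  => returns 0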
The following table shows an example of how the ASM calculates transaction values (averages and individual sessions). We would expect that a given client session would perform about the same number of transactions as the overall average number of transactions per session. But, if one of the sessions is performing a significantly higher number of transactions than the average, then we start to get suspicious. You can see that session 1 and session 3 have transaction values higher than the average, but that only tells part of the story. We need to consider a few more things before we decide if this client is a web scraper or not. By the way, if the ASM knows that a given session is malicious, it does not use that session's transaction numbers when it calculates the average.

Now, let's roll in the threshold values that we discussed above. If the ASM is going to declare a client a web scraper using the session transactions anomaly defense, the session transactions must first reach the minimum threshold. Using our default minimum threshold value of 200, the only session that exceeded the minimum threshold is session 3 (250 > 200). All other sessions look good so far...keep in mind that these numbers will change as the client performs additional transactions during the session, so more sessions may be considered as their transaction numbers increase.

Since we have our eye on session 3 at this point, it's time to look at our two methods of detecting an attack.

The first detection method is a simple comparison of the total session transaction value to our user-defined "session transactions reached" threshold. If the total session transactions value is larger than the threshold, the ASM will declare the client a web scraper. Our example would look like this: Is the session 3 transaction value greater than the threshold value (250 > 400)? No, so the ASM does not declare this client a web scraper.

The second detection method uses the "transactions increased by" value along with the average transaction value for all sessions. The ASM multiplies the average transaction value by the "transactions increased by" percentage to calculate the value needed for comparison. Our example would look like this: 90 * 500% = 450 transactions. Is the session 3 transaction value greater than the result (250 > 450)? No, so the ASM does not declare this client a web scraper.

By the way, only one of these detection methods needs to be met for the ASM to declare the client a web scraper. You should be able to see how the user-defined thresholds are used in these calculations and comparisons. So, it's important to raise or lower these values as needed for your environment.

Prevention Duration

In order to save you a bunch of time reading about prevention duration, I'll just say that the Session Transactions Anomaly prevention duration works the same as the Session Opening Anomaly prevention duration (Unlimited vs Maximum <number of> seconds). See, that was easy!

Conclusion

Thanks for spending some time reading about session anomalies and web scraping defense. The ASM does a great job of detecting and preventing web scrapers from taking your valuable information. One more thing...for an informative anomaly discussion on the DevCentral Security Forum, check out this conversation. If you have any questions about web scraping or ASM configurations, let me know...you can fill out the comment section below or you can contact the DevCentral team at https://devcentral.f5.com/s/community/contact-us.
v11.1: DNS Blackhole with iRules

Back in October, I attended a Security B-Sides event in Jefferson City (review here). One of the presenters (@bethayoung) talked about intentionally poisoning the internal DNS for known purveyors of all things bad. I indicated in my write-up that I'd be detailing an F5-based solution, and whereas a few weeks has turned into a couple months, well, here we are. As much as I had hoped to get it all together on my own, F5er Hugh O'Donnell beat me to it, and did a fantastic job. F5er Lee Orrick also contributed to the solution and I'll have more from him in a future article.

Conceptual Overview

Before jumping into the nuts and bolts, I'd like to describe the solution. First, consider normal operation: Joe Anonymous is surfing and hits a popular page that has been compromised. He hits a link for a cute video about puppies and rainbows and NOT SO FAST MY FRIEND! Instead of said cute puppies and rainbows video, he ends up with a nasty case of malware and his friendly neighborhood IT staff gets to spend some time remediating the damage - if it's caught at all. See, DNS is, if not the backbone of the internet, at least several of the vertebrae. And it does its job very well. Asked and answered. Done. If you hit a link with a malicious domain, there's a very, very good chance your DNS server will have no safeguards in place; it'll answer away. This is what a blackhole DNS solution is configured to overcome. The networking folks in the audience will be familiar with blackhole routing, and this is really no different a concept. When a user makes a query, the service inspects the destination, and if it matches a list of well-known badness, it returns an address of an internal site where remediation or at least notification can take place. In either event, the request is not hitting the malicious destination, which protects user and organization. See Figure 1 for the flow detail.

Building the Datagroup

As with iFiles in v11.1, datagroups can also be imported via the GUI and then referenced similarly. To import your blacklisted domains (there's a big list here: mirror1.malwaredomains.com), make sure your text editor is set for line feed terminator only (CR-LF won't work) and use this format for each entry:

".abbcp.cn" := "harmful",
".3dglases-panasonic-tv.com" := "zeusv2",

The first field is the domain, and the second field is a type description. The first will match your traffic; the second is strictly for classification purposes and can be edited as necessary.

Intercepting the DNS Requests

This solution can be implemented with LTM or GTM, though if the latter, the iRule will still need to be attached to the virtual server associated with the wideIP instead of the wideIP itself. In this article, I'll implement the LTM-based solution. As I'll be utilizing the new DNS:: commands, a DNS profile will need to be attached to the virtual server as well as the iRule below. Note that the blackhole class (named appropriately Blackhole_Class in the iRule below) should be present on the system for this solution to work.

# Author: Hugh O'Donnell, F5 Consulting
when RULE_INIT {
    # Set IPV4 address that is returned for Blackhole matches for A records
    set static::blackhole_reply_IPV4 "10.10.20.50"
    # Set IPV6 address that is returned for Blackhole matches for AAAA records
    set static::blackhole_reply_IPV6 "2001:19b8:101:2::f5f5:1d"
    # Set TTL used for all Blackhole replies
    set static::blackhole_ttl "300"
}
when DNS_REQUEST {
    # debugging statement to see all questions and request details
"Client: [IP::client_addr] Question:[DNS::question name] Type:[DNS::question type] Class:[DNS::question class] Origin:[DNS::origin]" # Blackhole_Match is used to track when a Query matches the blackhole list # Ensure it is always set to 0 or false at beginning of the DNS request set Blackhole_Match 0 # Blackhole_Type is used to track why this FQDN was added to the Blackhole_Class set Blackhole_Type "" # When the FQDN from the DNS Query is checked against the Blackhole class, the FQDN must start with a # period. This ensures we match a FQDN and all names to the left of it. This prevents against # malware that dynamically prepends characters to the domain name in order to bypass exact matches if {!([DNS::question name] == ".")} { set fqdn_name .[DNS::question name] } if { [class match $fqdn_name ends_with Blackhole_Class] } { # Client made a DNS request for a Blackhole site. set Blackhole_Match 1 set Blackhole_Type [class match -value $fqdn_name ends_with Blackhole_Class ] # Prevent processing by GTM, DNS Express, BIND and GTM Listener's pool. # Want to ensure we don't request a prohibited site and allow their server to identify or track the GTM source IP. DNS::return } } when DNS_RESPONSE { # debugging statement to see all questions and request details # log -noname local0. "Request: $fqdn_name Answer: [DNS::answer] Origin:[DNS::origin] Status: [DNS::header rcode] Flags: RD [DNS::header rd] RA [DNS::header ra]" if { $Blackhole_Match } { # This DNS request was for a Blackhole FQDN. Take different actions based on the request type. switch [DNS::question type] { "A" { # Clear out any DNS responses and insert the custom response. RA header = recursive answer DNS::answer clear DNS::answer insert "[DNS::question name]. $static::blackhole_ttl [DNS::question class] [DNS::question type] $static::blackhole_reply_IPV4" DNS::header ra "1" # log example: Apr 3 14:54:23 local/tmm info tmm[4694]: # Blackhole: 10.1.1.148#4902 requested foo.com query type: A class IN A-response: 10.1.1.60 log -noname local0. "Blackhole: [IP::client_addr]#[UDP::client_port] requested [DNS::question name] query type: [DNS::question type] class [DNS::question class] A-response: $static::blackhole_reply_IPV4 BH type: $Blackhole_Type" } "AAAA" { # Clear out any DNS responses and insert the custom response. RA header = recursive answer DNS::answer clear DNS::answer insert "[DNS::question name]. $static::blackhole_ttl km[DNS::question class] [DNS::question type] $static::blackhole_reply_IPV6" DNS::header ra "1" # log example: Apr 3 14:54:23 local/tmm info tmm[4694]: # Blackhole: 10.1.1.148#4902 requested foo.com query type: A class IN AAAA-response: 2001:19b8:101:2::f5f5:1d log -noname local0. "Blackhole: [IP::client_addr]#[UDP::client_port] requested [DNS::question name] query type: [DNS::question type] class [DNS::question class] AAAA-response: $static::blackhole_reply_IPV6 BH type: $Blackhole_Type" } default { # For other record types, e.g. MX, NS, TXT, etc, provide a blank NOERROR response DNS::last_act reject # log example: Apr 3 14:54:23 local/tmm info tmm[4694]: # Blackhole: 10.1.1.148#4902 requested foo.com query type: A class IN unable to respond log -noname local0. "Blackhole: [IP::client_addr]#[UDP::client_port] requested [DNS::question name] query type: [DNS::question type] class [DNS::question class] unable to respond BH type: $Blackhole_Type" } } } } This iRule handles the DNS request, responding on behalf of GTM or any DNS servers being load balanced by LTM. 
And since we're handling the blackhole site, we can serve that up as well from an iRule on an HTTP virtual server.

Serving the Remediation Page

The remediation page can be as simple as a text message indicating malware, or it can be a little more complex to show the category of the problem site as well as provide some contact information. The iRule below is an example of the latter.

# Author: Hugh O'Donnell, F5 Consulting
when HTTP_REQUEST {
    # the static HTML pages include the logo that is referenced in HTML as corp-logo.gif
    # intercept requests for this and reply with the image that is stored in an iFile defined in RULE_INIT below
    if {[HTTP::uri] ends_with "/_maintenance-page/corp-logo.png" } {
        # Present
        HTTP::respond 200 content $static::corp_logo
    } else {
        # Request for Blackhole webpage. Identify what type of block was in place
        switch -glob [class match -value ".[HTTP::host]" ends_with Blackhole_Class ] {
            "virus" {
                set block_reason "Virus site"
            }
            "phishing" {
                set block_reason "Phishing site"
            }
            "generic" {
                set block_reason "Unacceptable Usage"
            }
            default {
                set block_reason "Denied Per Policy - Other Sites"
            }
        }
        # Log details about the blackhole request to the remote syslog server
        log -noname local0. "Blackhole: From [IP::client_addr]:[TCP::client_port] \
            to [IP::local_addr]:[TCP::local_port], [HTTP::request_num], \
            [HTTP::method],[HTTP::uri],[HTTP::version], [HTTP::host], [HTTP::header value Referer], \
            [HTTP::header User-Agent], [HTTP::header names],[HTTP::cookie names], BH category: $block_reason,"
        # Send an HTML page to the user. The page is defined in the RULE_INIT event below
        HTTP::respond 200 content "$static::block_page [HTTP::host][HTTP::uri] $static::after_url $block_reason $static::after_block_reason "
    }
}
when RULE_INIT {
    # load the logo that was stored as an iFile
    set static::corp_logo [ifile get "/Common/f5ball"]
    # Beginning of the block page
    set static::block_page "
<html lang=\"en_US\">
<head>
<title>Web Access Denied - Enterprise Network Operations Center</title>
<meta http-equiv=\"Content-Type\" content=\"text/html; charset=us-ascii\">
<meta http-equiv=\"CACHE-CONTROL\" content=\"NO-CACHE\">
<meta http-equiv=\"PRAGMA\" content=\"NO-CACHE\">
<meta http-equiv=\"EXPIRES\" content=\"Mon, 22 Jul 2002 11:12:01 GMT\">
<style>
<!--
.mainbody { background-color: #C0C0C0; color: #000000; font-family: Verdana, Geneva, sans-serif; font-size: 12px; margin: 0px; padding: 20px 0px 20px 0px; position: relative; text-align: center; width: 100%; }
.bdywrpr { width:996px; height:auto; text-align:left; margin:0 auto; z-index:1; position: relative; }
#banner-wrapper { width: 950px; padding: 0px; margin: 0px; overflow:hidden; background-color: #FFFFFF; background-repeat: no-repeat; }
#banner-image { float: left; margin-left: auto; margin-right: auto; padding: 3px 0px 2px 7px; width: 950px; }
#textbody { background-color: #FFFFFF; color: #000000; font-family: Verdana, Geneva, sans-serif; font-size: 13px; width: 950px; padding:0px; text-align:justify; margin: 0px; }
-->
</style>
</head>
<body class=\"mainbody\">
<div class=\"bdywrpr\">
<div id=\"banner-wrapper\">
<!-- BANNER -->
<div id=\"banner-image\">
<center><img src=\"/_maintenance-page/corp-logo.png\" alt=\"Enterprise Network Operations Center\"></center>
</div>
</div>
<div id=\"textbody\">
<table border=\"0\" cellpadding=\"40\"><tr><td>
<center><p style=\"font-size:18px\"><b>Access has been denied.<br><br>
URL: "
    set static::after_url "</p></center></b></font>
<br>
Your request was denied because it is blacklisted in DNS.
<br><br> Blacklist category: " set static::after_block_reason "<br> <p> The Internet Gateways are for official use only. Misuse violates policy. If you believe that this site is categorized incorrectly, and that you have a valid business reason for access to this site please contact your manager for approval and the Enterprise Network Operations Center via <br><br> E-mail: <a href=\"mailto:enoc@example.com\">enoc@example.com</a> <br><br> Please use the Web Access Request Form and include a business justification. Only e-mail that originates from valid internal e-mail addresses will be processed. If you do not have a valid e-mail address, your manager will need to submit a request on your behalf. </center> <p> <font size=-1><i>Generated by bigip1.f5.com.</i></font> </td></tr></table> </div> </div> </body> </html> " } Note that the remediation page references an iFile for a logo. For details on configuring iFiles, please reference my article on iFiles. Also note that in addition to the client getting a heads-up notification of malfeasance, the visit is logged so other processes, individuals can act on the information. The Results First, our DNS query and response. Rather than test out a real well-known bad site, I added espn.com to my blacklist so if I forgot a step and leaked through to the real site I wouldn’t compromise anything. The response from my DNS virtual server is shown in Figure 2 below. You can see that the address matches the address set in the iRule as our blackhole IPv4 address. Also, the log information from that DNS query: Dec 28 15:35:08 tmm info tmm[6883]: Blackhole: 10.10.20.251#57714 requested espn.com query type: A class IN A-response: 10.10.20.50 BH type: sports Next, the resulting remediation page in my browser (Figure 3): And finally, the log entry from the HTTP request: Dec 28 15:35:08 tmm info tmm[6883]: Blackhole: From 10.10.20.251:32447 to 10.10.20.50:80, 1, GET,/,1.1, espn.com, , Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.63 Safari/535.7, Host Connection User-Agent Accept Accept-Encoding Accept-Language Accept-Charset,, BH category: Denied Per Policy - Other Sites, Conclusion This is a wicked application of iRules with new DNS and file handling features delivered in v11.1. If you wanted to take it even further, you could use sideband connections and reference an external list instead of a datagroup that will need constant refreshing. The GTM version of this solution is documented in CrowdSRC. If you’re curious about the DNS commands used in the iRule above, I’ll be discussing them in my next tech tip, so check back soon! Note:For the LTMsolution presented above, the DNSServices module or the GTM module is required to be licensed.1.3KViews0likes7CommentsMultiple Certs, One VIP: TLS Server Name Indication via iRules
Multiple Certs, One VIP: TLS Server Name Indication via iRules

An age-old question that we've seen time and time again in the iRules forums here on DevCentral is "How can I use iRules to manage multiple SSL certs on one VIP?". The answer has always historically been "I'm sorry, you can't.". The reasoning is sound. One VIP, one cert, that's how it's always been. You can't do anything with the connection until the handshake is established and decryption is done on the LTM. We'd like to help, but we just really can't. That is...until now.

The TLS protocol has somewhat recently provided the ability to pass a "desired servername" as a value in the originating SSL handshake. Finally we have what we've been looking for: a way to add contextual server info during the handshake, thereby allowing us to say "cert x is for domain x" and "cert y is for domain y". Known to us mortals as "Server Name Indication" or SNI (hence the title), this functionality is paramount for a device like the LTM that can regularly benefit from hosting multiple certs on a single IP. We should be able to pull out this information and choose an appropriate SSL profile now, with a cert that corresponds to the servername value that was sent. Now all we need is some logic to make this happen.

Lucky for us, one of the many bright minds in the DevCentral community has whipped up an iRule to show how you can finally tackle this challenge head on. Because Joel Moses, the shrewd mind and DevCentral MVP behind this example, has already done a solid write-up, I'll quote liberally from his fine work and add some additional context where fitting. Now on to the geekery:

First things first, you'll need to create a mapping of which servernames correlate to which certs (client SSL profiles in LTM's case). This could be done in any manner, really, but the most efficient both from a resource and management perspective is to use a class. Classes, also known as DataGroups, are name->value pairs that will allow you to easily retrieve the data later in the iRule. Quoting Joel:

Create a string-type datagroup to be called "tls_servername". Each hostname that needs to be supported on the VIP must be input along with its matching clientssl profile. For example, for the site "testsite.site.com" with a ClientSSL profile named "clientssl_testsite", you should add the following values to the datagroup.

String: testsite.site.com
Value: clientssl_testsite

Once you've finished inputting the different server->profile pairs, you're ready to move on to pools. It's very likely that since you're now managing multiple domains on this VIP you'll also want to be able to handle multiple pools to match those domains. To do that you'll need a second mapping that ties each servername to the desired pool. This could again be done in any format you like, but since it's the most efficient option and we're already using it, classes make the most sense here. Quoting from Joel:

If you wish to switch pool context at the time the servername is detected in TLS, then you need to create a string-type datagroup called "tls_servername_pool". You will input each hostname to be supported by the VIP and the pool to direct the traffic towards. For the site "testsite.site.com" to be directed to the pool "testsite_pool_80", add the following to the datagroup:

String: testsite.site.com
Value: testsite_pool_80

If you don't, that's fine, but realize all traffic from each of these hosts will be routed to the default pool, which is very likely not what you want.
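If you'd rather script the datagroup creation than click through the GUI, something along these lines should work from tmsh. This is a hedged sketch - tmsh syntax varies somewhat across TMOS versions, so verify against your release:

create ltm data-group internal tls_servername type string records add { testsite.site.com { data clientssl_testsite } }
create ltm data-group internal tls_servername_pool type string records add { testsite.site.com { data testsite_pool_80 } }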
Now then, we have two classes set up to manage the mappings of servername->SSL profile and servername->pool; all we need is some app logic in line to do the management and provide each inbound request with the appropriate profile & cert. This is done, of course, via iRules. Joel has written up one heck of an iRule which is available in the CodeShare (here) in its entirety along with his solid write-up, but I'll also include it here in-line, as is my habit. Effectively what's happening is the iRule is parsing through the data sent throughout the SSL handshake process and searching for the specific TLS servername extension, which holds the bits that will allow us to do the profile switching magic. He's written it up to fall back to the default client SSL profile and pool, so it's very important that both of these things exist on your VIP, or you may likely find yourself with unhappy users.

One last caveat before the code: not all browsers support Server Name Indication, so be careful not to implement this unless you are very confident that most, if not all, users connecting to this VIP will support SNI. For more info on testing for SNI compatibility and a list of browsers that do and don't support it, click through to Joel's awesome CodeShare entry; I've already plagiarized enough.

So finally, the code. Again, my hat is off to Joel Moses for this outstanding example of the power of iRules. Keep at it Joel, and thanks for sharing!

when CLIENT_ACCEPTED {
  if { [PROFILE::exists clientssl] } {

    # We have a clientssl profile attached to this VIP but we need
    # to find an SNI record in the client handshake. To do so, we'll
    # disable SSL processing and collect the initial TCP payload.

    set default_tls_pool [LB::server pool]
    set detect_handshake 1
    SSL::disable
    TCP::collect

  } else {

    # No clientssl profile means we're not going to work.

    log local0. "This iRule is applied to a VS that has no clientssl profile."
    set detect_handshake 0

  }
}

when CLIENT_DATA {

  if { ($detect_handshake) } {

    # If we're in a handshake detection, look for an SSL/TLS header.

    binary scan [TCP::payload] cSS tls_xacttype tls_version tls_recordlen

    # TLS is the only thing we want to process because it's the only
    # version that allows the servername extension to be present. When we
    # find a supported TLS version, we'll check to make sure we're getting
    # only a Client Hello transaction -- those are the only ones we can pull
    # the servername from prior to connection establishment.

    switch $tls_version {
      "769" -
      "770" -
      "771" {
        if { ($tls_xacttype == 22) } {
          binary scan [TCP::payload] @5c tls_action
          if { not (($tls_action == 1) && ([TCP::payload length] > $tls_recordlen)) } {
            set detect_handshake 0
          }
        }
      }
      default {
        set detect_handshake 0
      }
    }

    if { ($detect_handshake) } {

      # If we made it this far, we're still processing a TLS client hello.
      #
      # Skip the TLS header (43 bytes in) and process the record body. For TLS/1.0 we
      # expect this to contain only the session ID, cipher list, and compression
      # list. All but the cipher list will be null since we're handling a new transaction
      # (client hello) here. We have to determine how far out to parse the initial record
      # so we can find the TLS extensions if they exist.
      set record_offset 43
      binary scan [TCP::payload] @${record_offset}c tls_sessidlen
      set record_offset [expr {$record_offset + 1 + $tls_sessidlen}]
      binary scan [TCP::payload] @${record_offset}S tls_ciphlen
      set record_offset [expr {$record_offset + 2 + $tls_ciphlen}]
      binary scan [TCP::payload] @${record_offset}c tls_complen
      set record_offset [expr {$record_offset + 1 + $tls_complen}]

      # If we're in TLS and we've not parsed all the payload in the record
      # at this point, then we have TLS extensions to process. We will detect
      # the TLS extension package and parse each record individually.

      if { ([TCP::payload length] >= $record_offset) } {
        binary scan [TCP::payload] @${record_offset}S tls_extenlen
        set record_offset [expr {$record_offset + 2}]
        binary scan [TCP::payload] @${record_offset}a* tls_extensions

        # Loop through the TLS extension data looking for a type 00 extension
        # record. This is the IANA code for server_name in the TLS transaction.

        for { set x 0 } { $x < $tls_extenlen } { incr x 4 } {
          set start [expr {$x}]
          binary scan $tls_extensions @${start}SS etype elen
          if { ($etype == "00") } {

            # A servername record is present. Pull this value out of the packet data
            # and save it for later use. We start 9 bytes into the record to bypass
            # type, length, and SNI encoding header (which is itself 5 bytes long), and
            # capture the servername text (minus the header).

            set grabstart [expr {$start + 9}]
            set grabend [expr {$elen - 5}]
            binary scan $tls_extensions @${grabstart}A${grabend} tls_servername
            set start [expr {$start + $elen}]
          } else {

            # Bypass all other TLS extensions.

            set start [expr {$start + $elen}]
          }
          set x $start
        }

        # Check to see whether we got a servername indication from TLS. If so,
        # make the appropriate changes.

        if { ([info exists tls_servername] ) } {

          # Look for a matching servername in the Data Group and pool.

          set ssl_profile [class match -value [string tolower $tls_servername] equals tls_servername]
          set tls_pool [class match -value [string tolower $tls_servername] equals tls_servername_pool]

          if { $ssl_profile == "" } {

            # No match, so we allow this to fall through to the "default"
            # clientssl profile.

            SSL::enable
          } else {

            # A match was found in the Data Group, so we will change the SSL
            # profile to the one we found. Hide this activity from the iRules
            # parser.

            set ssl_profile_enable "SSL::profile $ssl_profile"
            catch { eval $ssl_profile_enable }
            if { not ($tls_pool == "") } {
              pool $tls_pool
            } else {
              pool $default_tls_pool
            }
            SSL::enable
          }
        } else {

          # No match because no SNI field was present. Fall through to the
          # "default" SSL profile.

          SSL::enable
        }

      } else {

        # We're not in a handshake. Keep on using the currently set SSL profile
        # for this transaction.

        SSL::enable
      }

      # Hold down any further processing and release the TCP session further
      # down the event loop.

      set detect_handshake 0
      TCP::release
    } else {

      # We've not been able to match an SNI field to an SSL profile. We will
      # fall back to the "default" SSL profile selected (this might lead to
      # certificate validation errors on non SNI-capable browsers).
      set detect_handshake 0
      SSL::enable
      TCP::release

    }
  }
}
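A quick way to exercise the rule once it's in place: openssl's s_client will send an explicit SNI value with the -servername flag, so you can confirm each hostname pulls the matching certificate. The VIP address below is illustrative - use your own:

openssl s_client -connect 192.0.2.10:443 -servername testsite.site.com

Check the certificate subject in the output, repeat for each servername in the datagroup, then run once more without -servername to confirm the default profile still answers.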
F5 Security on Owasp Top 10

Everyone is familiar with the OWASP Top 10. Below, you will find some notes on the Top 10, as well as ways to mitigate these potential threats to your environment. You can also download the PDF format by clicking the blankie ––> This is the first in a series that will cover the attack vectors and how to apply the protection methods.

A1 Injection

OWASP definition: Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing unauthorized data.

F5 protection: BIG-IP ASM inspects application traffic and blocks the insertion of malicious scripts. It does so by enforcing injection attack patterns and enforcing accurate usage of metacharacters within the URI and parameter names. ASM also looks at parameter values and can enforce pre-defined allowed values, length, and accurate usage of metacharacters.

A2 Cross-Site Scripting (XSS)

OWASP definition: XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation and escaping. XSS allows attackers to execute scripts in the victim's browser which can hijack user sessions, deface web sites, or redirect the user to malicious sites.

F5 protection: BIG-IP ASM protects against Cross-Site Scripting attacks by enforcing XSS attack patterns and enforcing accurate usage of metacharacters within the URI and parameter names. ASM also looks at parameter values and can enforce pre-defined allowed values, length, and accurate usage of metacharacters.

A3 Broken Authentication and Session Management

OWASP definition: Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, session tokens, or exploit other implementation flaws to assume other users' identities.

F5 protection: BIG-IP ASM enables protection by:
• Using ASM's unique login page enforcement configuration
• Enforcing login page timeouts
• Enabling application flow enforcement and dynamic parameter protection
• Using SSL on the login page
• Monitoring request attack patterns
• Using ASM signed cookies so none are being manipulated

A4 Insecure Direct Object References

OWASP definition: A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data.

F5 protection: If a hacker changes his account number to another random number hoping to access a different user's account, he can manipulate those references to access other objects without authorization. These can include:
• Fraud (price changes, user ID changes)
• Session hijacking
• Enforcing parameter values with high parameters

BIG-IP ASM mitigates this vulnerability by enforcing dynamic parameters (making sure values that were set by the server will not be changed on the client side). The admin can also whitelist the allowed URLs for the specific application and scan the requests with attack patterns.

A5 Cross-Site Request Forgery (CSRF)

OWASP definition: A CSRF attack forces a logged-on victim's browser to send a forged HTTP request, including the victim's session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim's browser to generate requests the vulnerable application thinks are legitimate requests from the victim.
F5 protection: BIG-IP ASM mitigates CSRF attacks by adding a random nonce to every URL. This nonce cannot be guessed in advance by an attacker and therefore makes the attack almost impossible. In addition, ASM prevents XSS within an application and enforces the application flow and dynamic parameter values. With flow access, a session timeout can be combined with an F5 iRule™ designed to perform a referrer header check to minimize CSRF. For instance, flow enforcement mitigates CSRF by limiting the entry points or web pages of attacks along with keeping session timeouts short. If referring to, say, www.food.com, ASM checks the referrer header in the URL to make sure it's food.com.

A6 Security Misconfiguration

OWASP definition: Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. All these settings should be defined, implemented, and maintained as many are not shipped with secure defaults. This includes keeping all software up to date, including all code libraries used by the application.

F5 protection: BIG-IP ASM can mitigate attacks that are related to misconfiguration by using a broad range of controls, starting with:
• RFC enforcement
• Enforcing various limits on the requests
• Whitelisting the URLs and parameter names and values
• Enforcing a login page
• Being a native full reverse proxy

A7 Insecure Cryptographic Storage

OWASP definition: Many web applications do not properly protect sensitive data, such as credit cards, SSNs, and authentication credentials, with appropriate encryption or hashing. Attackers may steal or modify such weakly protected data to conduct identity theft, credit card fraud, or other crimes.

F5 protection: While this isn't directly related to BIG-IP ASM or a WAF, OWASP is mostly concerned with what type of encryption is used and how it is used. These are both outside of the enforcement purview of ASM; however, ASM delivers the following:
• Data Guard - if someone managed to cause an information leakage, Data Guard can block it
• BIG-IP certificate management allows the user to store private keys in a central and secure place

A8 Failure to Restrict URL Access

OWASP definition: Many web applications check URL access rights before rendering protected links and buttons. However, applications need to perform similar access control checks each time these pages are accessed, or attackers will be able to forge URLs to access these hidden pages anyway.

F5 protection: There are multiple ways that BIG-IP ASM can mitigate this issue. ASM enforces allowed file types and URLs, and accurate parameter values and login pages. BIG-IP ASM's "flow" technology ensures that site content is only accessed by users that have acquired the proper credentials or visited the prerequisite pages. Users can only visit personal web pages if they have come from, say, a user ID and password sign-on web page.

A9 Insufficient Transport Layer Protection

OWASP definition: Applications frequently fail to authenticate, encrypt, and protect the confidentiality and integrity of sensitive network traffic. When they do, they sometimes support weak algorithms, use expired or invalid certificates, or do not use them correctly.

F5 protection: BIG-IP ASM significantly simplifies the implementation of SSL and certificate management by centralizing the location and administration of the server certificates in a single location rather than distributed over farms of servers. Also, by moving SSL handshaking and encryption to BIG-IP ASM, the Web servers gain an increased level of performance and efficiency.
In addition, ASM allows you to do the following:
• Require SSL for all sensitive pages; non-SSL requests to these pages are redirected to the SSL page. Use BIG-IP SSL Acceleration in general for the whole application.
• Set the 'secure' flag on all sensitive cookies.
• Configure your SSL provider to only support strong (e.g., FIPS 140-2 compliant) algorithms (use BIG-IP 6900, 8900).
• Ensure your certificate is valid, not expired, not revoked, and matches all domains used by the site. You can check with EM or scripts from DevCentral.
• Backend and other connections should also use SSL or other encryption technologies. Use re-encryption with a Server SSL profile.

A10 Unvalidated Redirects and Forwards

OWASP definition: Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages.

F5 protection: BIG-IP ASM mitigates this issue by enforcing unique attack patterns, enforcing accurate values of parameters, and enforcing dynamic parameters.
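As a small footnote to A9's first bullet, the classic way to push cleartext requests over to SSL on a BIG-IP is a one-line iRule on the port 80 virtual server. A minimal sketch (not part of the original article):

when HTTP_REQUEST {
    # Redirect any cleartext request to the HTTPS equivalent of the same host/URI
    HTTP::redirect "https://[HTTP::host][HTTP::uri]"
}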
CodeShare Refresh: HTTP Session Limit

The iRules CodeShare on DevCentral is an amazingly powerful, diverse collection of iRules that perform a myriad of tasks ranging from credit card scrubbing to form-based authentication to, as in today's example, limiting the number of HTTP sessions allowed. While the CodeShare is outstanding, it is a collection of code that has been contributed over the last several years. As such, some of it is written for older versions, like 9.x, where we didn't have some of the powerful, efficient commands and tools that we do currently within iRules. That is where the idea for a CodeShare Refresh series came from...getting those older v9-esque rules moved into modern times with table commands, static namespace variables, out-of-band connections, and all of the other added benefits that come along with the more modern codebase. We'll be digging through the CodeShare fishing out old rules and reviving them, then posting them back for future generations of DevCentral users to leverage. We'll also try to comment on why we're making the changes that we are, so you can see what has changed between versions in the process of us updating the code. With that, let's get started.

First I'll post the older snippet, then the updated version, ready to go into the wild in v11.x. The new rule in its entirety and the link to the older version can be found below.

Previously, in pre-CMP-compliant v9 iRules, it was relatively commonplace to set global variables. This is a big no-no now, as it demotes connections out of CMP, which is a large performance hit. So while the old iRule's RULE_INIT section looked like:

when RULE_INIT {
    set ::total_active_clients 0
    set ::max_active_clients 100
    log local0. "rule session_limit initialized: total/max: $::total_active_clients/$::max_active_clients"
}

The newer version updated for v11 looks like:

when RULE_INIT {
    set static::max_active_clients 100
}

Note the use of the static:: namespace. This is a place to safely store static information in a globally available form that will not interfere with CMP. These values are, as the namespace indicates, static, but it's extremely valuable in many cases like this one where we're setting a cap for the number of clients that we want to allow. Also note that there is no active clients counter at all, due to the fact that we've changed things later on in the iRule. As a result it made no sense to log the initialization line from the older iRule either, so we've trimmed the RULE_INIT event down a bit.

Next up, the first half of the HTTP_REQUEST event, in which the current number of active clients is compared to max_active_clients.
First the v9 code from the CodeShare:

    when HTTP_REQUEST {
        # test cookie presence
        if {[HTTP::cookie exists "ClientID"]} {
            set need_cookie 0
            set client_id [HTTP::cookie "ClientID"]
        # if cookie not present & connection limit not reached, set up client_id
        } else {
            if {$::total_active_clients < $::max_active_clients} {
                set need_cookie 1
                set client_id [format "%08d" [expr { int(100000000 * rand()) }]]

Now the v11 code:

    when HTTP_REQUEST {
        # test cookie presence
        if {[HTTP::cookie exists "ClientID"]} {
            set need_cookie 0
            set client_id [HTTP::cookie "ClientID"]
        # if cookie not present & connection limit not reached, set up client_id
        } else {
            if {[table keys -subtable httplimit -count] < $static::max_active_clients} {
                set need_cookie 1
                set client_id [format "%08d" [expr { int(100000000 * rand()) }]]

The only change here is a pretty notable one: out with global variables, in with session tables! Here we introduce the table command, released with v10, which gives us extremely efficient access to the session table. In this iRule all we need is a counter, so we're using a subtable called httplimit and adding a new record to that subtable for each session coming in. Then, with the table keys command and its -count option, we can quickly and efficiently count the number of rows in that subtable, which gives us the number of HTTP sessions currently active for this VIP. Note that the rest of the code stayed the same. There are many ways to do things in iRules, but I'm trying not to fiddle with the logic or execution of the rules in this series more than necessary to update them for the newer versions. So now that we're using the table command to do the lookups, we should likely use it to set and increment the counter as well. That occurs in the other half of the HTTP_REQUEST event. First v9:

                # Only count this request if it's the first on the TCP connection
                if {[HTTP::request_num] == 1}{
                    incr ::total_active_clients
                }
            } else {
                # otherwise redirect
                HTTP::redirect "http://sorry.domain.com/"
                return
            }
        }
    }

Again you can see the use of the global variable and the incr command. Next is the v11 update:

                # Only count this request if it's the first on the TCP connection
                if {[HTTP::request_num] == 1}{
                    table set -subtable httplimit [IP::client_addr]:[TCP::client_port] "blocked"
                    set timer [after 60000 -periodic {
                        table lookup -subtable httplimit [IP::client_addr]:[TCP::client_port]
                    }]
                }
            } else {
                HTTP::redirect "http://sorry.domain.com/"
                event CLIENT_CLOSED disable
                return
            }
        }
    }

As you can see, things here have changed quite a bit. First of all, here is the way we're using the table command to increment our counter. Rather than keeping a single counter, we are adding rows to the session table in a particular subtable, as I mentioned before. We're using the client's IP and port to give us a unique identifier for that client, which allows us to do the table keys lookup to count the number of active clients. We're also instantiating a timer here. Using the after -periodic command we are setting up a non-blocking loop that will touch the entry we've just created every 60 seconds. This is because the entry in the session table has a timeout of 180 seconds, which is the default. Now, we could have made that entry permanent, but that's not what we want. When counting things using an event-based structure, it's important to take into account cases where a particular event might not fire.
While it's rare, there are technically cases where the CLIENT_CLOSED event may not fire if circumstances are just right. In that case, using the old structure with just a simple counter, the count would be off and could drift. This timer, which you'll see in the last section, is terminated in CLIENT_CLOSED along with removing the entry for this session in the table (effectively decrementing the counter), ensuring that even if something wonky happens, the count will resolve and remain accurate. It's a bit of a concept to wrap your head around, but a solid one, and the overhead it introduces is far less than what you gain by keeping this rule CMP-compliant. Also note that we're disabling the CLIENT_CLOSED event if the user is over their limit. This is to ensure that the counter for their IP/port combo isn't decremented. Next is the HTTP_RESPONSE event, which remains entirely unchanged, so the v9 and v11 versions are the same:

    when HTTP_RESPONSE {
        # insert cookie if needed
        if {$need_cookie == 1} {
            HTTP::cookie insert name "ClientID" value $client_id path "/"
        }
    }

And last, but not least, the CLIENT_CLOSED event. First v9, with our simple counter and the nearly infamous incr -1:

    when CLIENT_CLOSED {
        # decrement current connection counter for this client_id
        if {$::total_active_clients > 0} {
            incr ::total_active_clients -1
        }
    }

And now the updated version for v11:

    when CLIENT_CLOSED {
        # remove this connection's entry, effectively decrementing the counter
        if { [info exists timer] } {
            after cancel $timer
        }
        table delete -subtable httplimit [IP::client_addr]:[TCP::client_port]
    }

The two main things to note here are that we no longer need to guard against the counter dropping below zero, since that's not possible with this method, and the way we're decrementing things. Note that we're not decrementing at all. We're deleting the row out of the subtable that represents the current HTTP session. As such, it won't be counted on the next iteration, and poof...decremented counter. We're also dumping the periodic after that we spun up in the HTTP_REQUEST section to keep our entry pinned up in the session table until the session actually terminated; since that timer is only created for connections that were actually counted, we guard the after cancel with an info exists check. So there you have it, a freshly updated version of the HTTP Session Limiting iRule out of the CodeShare. Hopefully this is helpful and we'll continue refreshing this valuable content for the ever-expanding DevCentral community.
Here is the complete v11 iRule, which can also be found in the CodeShare:

    when RULE_INIT {
        set static::max_active_clients 100
    }

    when HTTP_REQUEST {
        # test cookie presence
        if {[HTTP::cookie exists "ClientID"]} {
            set need_cookie 0
            set client_id [HTTP::cookie "ClientID"]
        # if cookie not present & connection limit not reached, set up client_id
        } else {
            if {[table keys -subtable httplimit -count] < $static::max_active_clients} {
                set need_cookie 1
                set client_id [format "%08d" [expr { int(100000000 * rand()) }]]

                # Only count this request if it's the first on the TCP connection
                if {[HTTP::request_num] == 1}{
                    table set -subtable httplimit [IP::client_addr]:[TCP::client_port] "blocked"
                    set timer [after 60000 -periodic {
                        table lookup -subtable httplimit [IP::client_addr]:[TCP::client_port]
                    }]
                }
            } else {
                HTTP::redirect "http://sorry.domain.com/"
                event CLIENT_CLOSED disable
                return
            }
        }
    }

    when HTTP_RESPONSE {
        # insert cookie if needed
        if {$need_cookie == 1} {
            HTTP::cookie insert name "ClientID" value $client_id path "/"
        }
    }

    when CLIENT_CLOSED {
        # remove this connection's entry, effectively decrementing the counter
        if { [info exists timer] } {
            after cancel $timer
        }
        table delete -subtable httplimit [IP::client_addr]:[TCP::client_port]
    }

Web Application Login Integration with APM
As we hurtle forward through the information age we continue to find ourselves increasingly dependent on the applications upon which we rely. Whether it's your favorite iPhone app or the tools that allow you to do your job, the applications that you interact with and the information contained within are a sacred thing to the average IT warrior. As such, we hold the security of said applications and data in high regard. This leads to entire security teams dreaming up more complex ways to protect your information, along with more hoops for you to jump through to gain access. Their intentions are pure, and the added barriers to entry by way of complex login processes, password security requirements, access enforcement, etc. are justified, but that makes them no less cumbersome. For remote users the saga is even worse. While the local application user gripes about the latest policy enforced for their own good by the security team, the remote user deals with the same restrictions and then some, often including the added step of having to "remote in" via VPN of some sort before even being allowed access to the resources they require. Fortunately, it doesn't have to be this way. One of the many features F5's Access Policy Manager (APM) offers is seamless integration into your existing application or portal. By configuring APM to accept credentials passed via a simple POST from any application, it can truly be a transparent gateway into your authentication infrastructure. With this configuration, whether the user is internal or external, simply logging into the portal/app in question will pass their credentials to APM, which can then seamlessly assign them to the appropriate resources, granting or limiting their access as necessary. The setup is simple. Let's walk through it, assuming we have a basic Virtual IP set up on the APM and our application has a simple HTML login form whose action submits to https://10.10.12.8/fake. That is a "fake" URI on the APM device that's going to be doing the heavy lifting here (the Virtual Server's IP on the APM is 10.10.12.8). The rendered form looks just as you would expect: a username field, a password field, and a Logon button. Now that we have a form to play with, we'll need to capture the credentials being passed when a user submits the form to log in. This is done via a simple, flexible iRule. We'll use "/fake" as the URI we're searching for, then do some straightforward logic to identify the username and password in the POST from the form. Once we have that data singled out, we'll store it in session variables for later use in APM's auth process; a sketch of such an iRule is shown below. Once the iRule is created and assigned to the APM virtual (10.10.12.8, from above), we should have the data collected and stored as needed. Next we need to instruct APM to access those session variables we set earlier. To do so we'll use the Visual Policy Editor built into APM, which makes constructing login flows simple. First add an empty event to check for the credentials. We'll call that "have creds". Next, be sure you have some form of authentication set up to actually authenticate the credentials passed in via the form once the "have creds" event succeeds; otherwise anyone will be able to log in, and that's not what we're looking for. For that we've set up a standard AD Auth event in APM, which will fire and perform the actual authentication step once we've successfully collected the data from the POST.
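The original article presents the iRule itself as a screenshot, so as a stand-in, here is a minimal sketch of the credential-capture logic described above. The form field names ("username" and "password") and the parsing details are assumptions for illustration, not the article's exact code:

    when HTTP_REQUEST {
        # only inspect POSTs to the "fake" login-processing URI
        if { [HTTP::uri] starts_with "/fake" && [HTTP::method] eq "POST" } {
            HTTP::collect [HTTP::header Content-Length]
        }
    }

    when HTTP_REQUEST_DATA {
        # walk the form-encoded payload looking for the credential fields
        foreach param [split [HTTP::payload] "&"] {
            set name  [URI::decode [lindex [split $param "="] 0]]
            set value [URI::decode [lindex [split $param "="] 1]]
            switch $name {
                "username" { ACCESS::session data set session.logon.last.username $value }
                "password" { ACCESS::session data set session.logon.last.password $value }
            }
        }
        HTTP::release
    }

Whether the APM session already exists at the moment this fires depends on the policy flow, so treat this strictly as a starting point rather than a drop-in implementation.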
In the VPE, the flow starts with the "have creds" event, which leads into the AD Auth object. Now that we have the basic structure in place, let's make it do something. Create a new Branch Rule (also named "have creds" for this example, appearing on the line between the have creds and AD Auth objects) that makes use of an advanced check. We'll tell it to look for the session.logon.last.username variable that we set in the iRule, to ensure that the rule was successful (the expression itself is sketched below). If this check succeeds, it means the iRule found a username in the POST from the form, and we can safely pass the username and password variables on to the AD Auth event for processing. So, assuming the username and password entered in the original HTML form are correct, once you hit "Logon" to submit the form, your information will be passed back through APM for authentication via AD, and you will then be assigned whatever resources are deemed appropriate for your user based on the "Resource Assign" action. There you have it: login integration to assign resource access. This could of course be done with any login form, not just the simple HTML example above, to save users one more hoop to jump through before getting access to the applications they depend on.
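One footnote on the branch rule mentioned above: in the VPE's advanced tab, the check boils down to a one-line expression. The following uses APM's mcget syntax and is a reconstruction based on the description, not a capture of the original screenshot:

    expr { [mcget {session.logon.last.username}] != "" }

If the iRule never found a username in the POST, the variable is empty and the policy falls through to the fallback branch.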
Two-Factor Authentication With Google Authenticator And APM

Introduction

Two-factor authentication (TFA) has been around for many years, and the concept far pre-dates computers. The application of a keyed padlock and a combination lock to secure a single point would technically qualify as two-factor authentication: "something you have," a key, and "something you know," a combination. Until the past few years, two-factor authentication in its electronic form had been reserved for high-security environments: government, banks, large companies, etc. The most common method for implementing a second authentication factor has been to issue every employee a disconnected, time-based, one-time-password hard token. The term "disconnected" refers to the absence of a connection between the token and a central authentication server. A "hard token" implies that the device is purpose-built for authentication and serves no other purpose. A soft or "software" token, on the other hand, has uses beyond providing an authentication mechanism. In the context of this article we will refer to mobile devices as soft tokens. This fits our definition, as the device can be used to make phone calls, check email, and surf the Internet, all in addition to providing a time-based one-time password.

A time-based one-time password (TOTP) is a single-use code for authenticating a user. It can be used by itself or to supplement another authentication method. It fits the definition of "something you have," as it cannot be easily duplicated and reused elsewhere. This differs from a username and password combination, which is "something you know," but could easily be duplicated by someone else. The TOTP uses a shared secret and the current time to calculate a code, which is displayed for the user and regenerated at regular intervals. Because the token and the authentication server are disconnected from each other, the clocks of each must be perfectly in sync. This is accomplished by using Network Time Protocol (NTP) to synchronize the clocks of each device with the correct time of central time servers.

Using Google Authenticator as a soft token application makes sense from many angles. It is low-cost due to the proliferation of smartphones, and it is available from the "app store" free of charge on all major platforms. It builds on an open standard (the HOTP algorithm defined by RFC 4226), which means that it is well-tested, well-understood, and secure. The calculation, as you will see later, is well-documented and relatively easy to implement in your language of choice (iRules in our case). This process is explained in the next section.

This Tech Tip is a follow-up to Two-Factor Authentication With Google Authenticator And LDAP. The first article in this series highlighted two-factor authentication with Google Authenticator and LDAP on an LTM. In this follow-up, we will be covering implementation of this solution with Access Policy Manager (APM). APM allows for far more granular control of network resources via access policies. Access policies are rule sets, which are intuitively displayed in the UI as flow charts. After creation, an access policy is applied to a virtual server to provide security, authentication services, client inspection, policy enforcement, etc. This article highlights not only a two-factor authentication solution, but also the usage of iRules within APM policies. By combining the extensibility of iRules with APM's access policies, we are able to create virtually any functionality we might need.

Note: A 10-user, fully-featured APM license is included with every LTM license.
You do not need to purchase an additional module to use this feature if you have fewer than 10 users.

Calculating The Google Authenticator TOTP

The Google Authenticator TOTP is calculated by generating an HMAC-SHA1 digest, using the 10-byte shared secret (distributed to the user base32-encoded) as the key and the current Unix time (epoch) divided into 30-second intervals as the message. The resulting 20-byte digest is converted to a 40-character hexadecimal string; the least significant (last) hex digit is then used to calculate an offset of 0-15. Starting at that offset, the next 8 hex digits are read, AND'd with 0x7FFFFFFF (2,147,483,647), and the modulo of the resulting integer and 1,000,000 is calculated, which produces the correct code for that 30-second period. A compact sketch of this calculation appears below.

Base32 encoding and decoding were covered in my previous Tech Tip titled Base32 Encoding And Decoding With iRules. That Tech Tip details the process for decoding a user's base32-encoded key to binary, as well as converting a binary key to base32. The HMAC-SHA256 token calculation iRule was originally submitted by Nat to the Codeshare on DevCentral. The iRule was slightly modified to support the SHA-1 algorithm, but is otherwise taken directly from the pseudocode outlined in RFC 2104. These two pieces of code contribute the bulk of the processing of the Google Authenticator code. The rest is done with simple bitwise and arithmetic functions.

Triggering iRules From An APM Access Policy

Our previously published Google Authenticator iRule combined the functionality of Google Authenticator token verification with LDAP authentication. It was written for a standalone LTM system, without the leverage of APM's Visual Policy Editor. The issue with combining these two authentication factors in a single iRule is that their functionality is not mutually exclusive or easily separable. We can greatly reduce the complexity of our iRule by isolating the Google Authenticator token verification and moving the directory server authentication to the APM access policy.

APM iRules differ from those that we typically develop for LTM. iRules assigned to an LTM virtual server are triggered by events that occur during connection or payload handling. Many of these events still apply to an LTM virtual server with an APM policy, but they have no perspective into the access policy. This is where we enter the realm of APM iRules. APM iRules are applied to a virtual server exactly like any other iRule, but are triggered by custom iRule event agent IDs within the access policy. When the access policy reaches an iRule event, it will trigger the ACCESS_POLICY_AGENT_EVENT iRule event. Within the iRule we can execute the ACCESS::policy agent_id command to return the iRule event ID that triggered the event, and we can then match on this ID string prior to executing any additional code. Within the iRule we can also get and set APM session variables with the ACCESS::session command, which serves as our conduit for transferring variables to and from our access policy.
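To make the token math concrete before wiring it into the policy, here is a compact standalone Tcl sketch of the calculation described in the previous section. It uses tcllib's sha1 package and Tcl 8.5+ (not iRule-specific commands), and it assumes the shared secret has already been base32-decoded to its raw 10 bytes; it is an illustration, not the CodeShare iRule itself:

    package require sha1

    proc totp {secret {step 30} {digits 6}} {
        # message: Unix time divided into 30-second intervals,
        # packed as an 8-byte big-endian counter
        set counter [binary format W [expr {[clock seconds] / $step}]]
        # 20-byte HMAC-SHA1 digest keyed with the raw shared secret
        set digest [sha1::hmac -bin -key $secret $counter]
        # dynamic truncation: low nibble of the last byte is the offset (0-15)
        binary scan $digest c* bytes
        set offset [expr {[lindex $bytes end] & 0x0F}]
        # read 4 bytes at the offset, mask the sign bit, reduce mod 10^digits
        binary scan [string range $digest $offset [expr {$offset + 3}]] I code
        return [format %0${digits}d [expr {($code & 0x7FFFFFFF) % int(pow(10, $digits))}]]
    }

Feeding this the same secret the soft token holds should produce the code currently displayed on the user's device, which is exactly the comparison the verification iRule performs.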
A visual walkthrough of the trigger process is shown below.

iRule Trigger Process
• Create an iRule Event in the Visual Policy Editor
• Specify a Name for the object and an ID for the Custom iRule Event Agent
• Create an iRule with the ID referenced and assign it to the virtual server:

    when ACCESS_POLICY_AGENT_EVENT {
        if { [ACCESS::policy agent_id] eq "ga_code_verify" } {
            # get APM session variables
            set username [ACCESS::session data get session.logon.last.username]

            ### Google Authenticator token verification (code omitted for brevity) ###

            # set APM session variables
            ACCESS::session data set session.custom.ga_result $ga_result
        }
    }

• Add branch rules to the iRule Event which read the custom session variable and handle the result

Google Authenticator Two-Factor Authentication Process

Two-Factor Authentication Access Policy Overview

Rather than walking through the entire process of configuring the access policy from scratch, we'll look at the policy (available for download at the bottom of this Tech Tip) and discuss the flow. The policy has been simplified by creating macros for the redundant portions of the authentication process: Google Authenticator token verification and the two-factor authentication processes for LDAP and Active Directory.

The "Google Auth verification" macro consists of an iRule event and 5 branch rules. The number of branch rules could be reduced to just two, success and failure, but that would limit our diagnostic capabilities should we hit a snag during our deployment, so we added logging for all of the potential failure scenarios. Remember that these logs are sent to APM reporting (Web UI: Access Policy > Reports), not /var/log/ltm. APM reporting is designed to provide per-session logging in the user interface without requiring grepping of the log files.

The LDAP and Active Directory macros contain the directory server authentication and query mechanisms. Directory server queries are used to retrieve user information from the directory server. In this case we can store our Google Authenticator key (shared secret) in a schema attribute to remove a dependency from our BIG-IP. We do, however, offer the ability to store the key in a data group as well. The main portion of the access policy is far simpler and easier to read by using macros.

When the user first enters our virtual server, we look at the Landing URI they are requesting. A first-time request will be sent to the "normal" logon page. The user will then input their credentials along with the one-time password provided by the Google Authenticator token. If the user's credentials and one-time password match, they are allowed access. If they fail the authentication process, we increment a counter via a table in our iRule (sketched below) and redirect them back to an "error" logon page. The "error" logon page notifies them that their credentials are invalid; the notification makes no reference as to which of the two factors they failed. If the user exceeds the allowed number of failures for a specified period of time, their session will be terminated and they will be unable to log in for a short period of time. An authenticated user is allowed access to secured resources for the duration of their session.
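The per-user failure counting described above maps naturally onto the same table command used earlier in this series. The following is a hedged sketch of that bookkeeping, with invented variable and subtable names, not the shipped iRule's exact code:

    # inside the verification logic, after a failed code attempt
    set attempts [table incr -subtable ga_lockout $username]
    table timeout -subtable ga_lockout $username $static::lockout_period
    if { $attempts > $static::lockout_attempts } {
        # flag the session so a branch rule can terminate it
        ACCESS::session data set session.custom.ga_result lockout
    }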
Deploying Google Authenticator Token Verification

This solution requires three components (one optional) for deployment:
• Sample access policy
• Google Authenticator token verification iRule
• Google Authenticator token generation iRule (optional)

The process for deploying this solution has been divided into four sections:

Configuring an AAA server
• Log in to the Web UI of your APM
• From the side panel select Access Policy > AAA Servers > Active Directory, then the + next to the text to create a new AD server
• Within the AD creation form you'll need to provide a Name, Domain Controller, Domain Name, Admin Username, and Admin Password
• When you have completed the form, click Finished

Copy the iRule to the BIG-IP and configure options
• Download a copy of the Google Authenticator Token Verification iRule for APM from the DevCentral CodeShare (hint: this is much easier if you "edit" the wiki page to display the source without the line numbers and formatting)
• Navigate to Local Traffic > iRules > iRule List and click the + symbol
• Name the iRule "google_auth_verify_apm," then copy and paste the iRule from the CodeShare into the Definition field
• At the top of the iRule there are a few options that need to be defined:
  - lockout_attempts - number of attempts a user is allowed to make prior to being locked out temporarily (default: 3 attempts)
  - lockout_period - duration of the lockout period (default: 30 seconds)
  - ga_code_form_field - name of the HTML form field used in the APM logon page; this field is defined in the "Logon Page" access policy object (default: ga_code_attempt)
  - ga_key_storage - key storage method for users' Google Authenticator shared keys; valid options include datagroup, ldap, or ad (default: datagroup)
  - ga_key_ldap_attr - name of the LDAP schema attribute containing users' keys
  - ga_key_ad_attr - name of the Active Directory schema attribute containing users' keys
  - ga_key_dg - data group containing user := key mappings
• Click Finished when you've configured the iRule options to your liking

Import the sample access policy
• From the Web UI, select Access Policy > Access Profiles > Access Profiles List
• In the upper right corner, click Import
• Download the sample policy for Two-Factor Authentication With Google Authenticator And APM and extract the .conf from the ZIP archive
• Fill in the New Profile Name with a name of your choosing, then select Choose File, navigate to the extracted sample policy, and click Open
• Click Import to complete the policy import
• The sample policy's AAA servers will likely not work in your environment; from the Access Policy List, click Edit next to the imported policy
• When the Visual Policy Editor opens, expand the macro (LDAP or Active Directory auth) that describes your environment
• Click the AD Auth object, select the AD server from the drop-down that was defined earlier in the AAA Servers step, then click Save
• Repeat this process for the AD Query object

Assign the sample policy and iRule to a virtual server
• From the Web UI, select Local Traffic > Virtual Servers > Virtual Server List, then the create button (+)
• In the New Virtual Server form, fill in the Name, Destination address, and Service Port (should be HTTPS/443), then select an HTTP profile and an SSL Profile (Client). Next you'll add a SNAT Profile if needed, an Access Profile, and finally the token verification iRule
• Depending on your deployment you may want to add a pool or other network connectivity resources
• Finally, click Finished

At this point you should have a functioning virtual server serving your access policy.
You'll now need to add some tokens for your users. This process is another section on its own and is listed below.

Generating Software Tokens For Users

In addition to the Google Authenticator Token Verification iRule for APM, we also wrote a Google Authenticator Soft Token Generator iRule that will generate soft tokens for your users. The iRule can be added directly to an HTTP virtual server without a pool and accessed directly to create tokens. There are a few available fields in the generator: account, pre-defined secret, and a QR code option. The "account" field defines how to label the soft token within the user's mobile device and can be useful if the user has multiple soft tokens on the same device (I have 3 and need to label them to keep them straight). A 10-byte string can be used as a pre-defined secret for conversion to a base32-encoded key. We advise against using a pre-defined key, because a key known to the user is something they know (as opposed to something they have) and could potentially be regenerated out-of-band, thereby nullifying the benefits of two-factor authentication. Lastly, there is an option to generate a QR code by sending an HTTPS request to Google and returning the QR code as an image. While this is convenient, it could be seen as insecure since the key may wind up in Google's logs somewhere. You'll have to decide if that is a risk you're willing to take for the convenience it provides.

Once the token has been generated, it will need to be added to a data group on the BIG-IP:
• Navigate to Local Traffic > iRules > Data Group Lists
• Select Create from the upper right-hand corner if the data group does not yet exist. If it exists, just select it from the list.
• Name the data group "google_auth_keys" (the data group name can be changed in the beginning section of the iRule)
• The type of data group will be String
• Type the username into the String field and paste the Google Authenticator key into the Value field
• Click Add, and the username/key pair should appear in the list as such: user := ONSWG4TFOQYTEMZU
• Click Finished when all your username/key pairs have been added

Your user can scan the QR code or type the key into their device manually. After they scan the QR code, the account name should appear along with the TOTP for the account in the Google Authenticator application. Once again, do not let the user leave with a copy of the plain-text key. Knowing their key value would negate the value of having the token in the first place. Once the key has been added to the BIG-IP and the user's device, and they've tested their access, destroy any reference to the key outside the BIG-IP's data group. If you're worried about having the keys in plain text on the BIG-IP, they can be encrypted with AES or stored off-box in LDAP and only queried via a secure connection. This is beyond the scope of this article, but doable with iRules; a sketch follows.
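As an illustration of that closing point, here is a hedged sketch of what encrypting the stored keys with iRules' AES commands could look like. The static::ga_aes_key variable is an assumption invented for this sketch: you would generate it once (e.g., with AES::key 256) and keep it in the iRule configuration:

    # one-time: generate and record an AES key (assumption for this sketch)
    # set static::ga_aes_key [AES::key 256]

    # encrypt a user's Google Authenticator secret before storing it;
    # b64encode makes the ciphertext safe to keep in a string data group
    set stored_value [b64encode [AES::encrypt $static::ga_aes_key $ga_key]]

    # decrypt at verification time
    set ga_key [AES::decrypt $static::ga_aes_key [b64decode $stored_value]]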
Code
• Google Authenticator Token Verification iRule for APM – documentation and code for the iRule used in this Tech Tip
• Google Authenticator Soft Token Generator iRule – iRule for generating soft tokens for users
• Sample Access Policy: Two-Factor Authentication With Google Authenticator And APM – APM access policy

Reference Materials
• RFC 4226 – HOTP: An HMAC-Based One-Time Password Algorithm
• RFC 2104 – HMAC: Keyed-Hashing for Message Authentication
• RFC 4648 – The Base16, Base32, and Base64 Data Encodings
• SOL3122: Configuring the BIG-IP system to use an NTP server using the Configuration utility – information on configuring time servers
• Configuration Guide for BIG-IP Access Policy Manager – the "big book" on APM configurations
• Configuring Authentication Using AAA Servers – official F5 documentation for configuring AAA servers for APM
• Troubleshooting AAA Configurations – extra help if you hit a snag configuring your AAA server

RADIUS Load Balancing with iRules
What is RADIUS?

"Remote Authentication Dial In User Service," or RADIUS, is a very mature and widely implemented protocol for exchanging "Triple A" (Authentication, Authorization, and Accounting) information. RADIUS is a relatively simple, transactional protocol. Clients such as remote access servers, FirePass, BIG-IP, etc. originate RADIUS requests (for example, to authenticate a user based on a username/password combination) and then wait for a response from the RADIUS server. Information is exchanged between a RADIUS client and server in the form of attributes. User-Name, User-Password, IP address, port, and session state are all examples of attributes. Attributes can be in the format of text, string, IP address, integer, or timestamp. Some attributes are variable in length, some are fixed.

Why is protocol-specific support valuable?

In typical UDP load balancing (not protocol-specific), there is one common challenge: if a client always sends requests with the same source port, packets will never be balanced across multiple servers. This behavior is the default for a UDP profile. To allow load balancing to work in this situation, the common recommendation is to use "Datagram LB" or an immediate session timeout. By using Datagram LB, every packet will be balanced. However, if a new request comes in before the reply for the previous request comes back from the server, BIG-IP LTM may change the source port of that new request before forwarding it to the server. This may result in an application not acting properly. In this latter case, "immediate timeout" must be used instead, and an additional virtual server may be needed for outbound traffic in order to route traffic back to the client. In short, to enable load balancing for RADIUS transaction-based traffic coming from the same source IP and source port, Datagram LB or immediate timeout should be employed.

This configuration works in most cases. However, if the transaction requires more than 2 packets (1 request, 1 response), then further BIG-IP LTM work is needed. An example where this is important occurs in RADIUS challenge/response handshakes, which require 4 packets:

    Client ---- access-request ---> Server
    Client <-- access-challenge --- Server
    Client --- access-request ----> Server
    Client <--- access-accept ----- Server

For this traffic to succeed, all packets associated with the same transaction must be returned to the same server. In this case, custom layer 7 persistence is needed, and iRules can provide that persistence. With iRules that understand the RADIUS protocol, BIG-IP LTM can direct traffic based on any attribute sent by the client, or persist sessions based on any attribute sent by the client or server. Session management can then be moved to the BIG-IP, reducing server-side complexity. BIG-IP can provide almost unlimited intelligence in an iRule that can even re-calculate MD5 hashes, modify usernames, detect realms, etc. BIG-IP LTM can also provide security at the application level of the RADIUS protocol, rejecting malformed traffic, denial-of-service attacks, or similar threats using customized iRules.

Solution

A Datagram LB UDP profile or an immediate timeout may be used if requests from the client always use the same source IP and port. If immediate timeout is used, there should be an additional VIP for outbound traffic originating from the server to the client, along with an appropriate SNAT (same IP as the VIP). The Identifier field or certain attributes can be used for Universal Inspection Engine (UIE) persistence.
If the immediate timeout / two-VIP technique is used, it should be combined with the session command using the "any" option.

iRules

1) Here is a sample iRule which does nothing except decode and log some attribute information. This is a good example of the depth of fluency you can achieve via an iRule dealing with RADIUS.

    when RULE_INIT {
        array set ::attr_code2name {
            1 User-Name  2 User-Password  3 CHAP-Password  4 NAS-IP-Address
            5 NAS-Port  6 Service-Type  7 Framed-Protocol  8 Framed-IP-Address
            9 Framed-IP-Netmask  10 Framed-Routing  11 Filter-Id  12 Framed-MTU
            13 Framed-Compression  14 Login-IP-Host  15 Login-Service  16 Login-TCP-Port
            17 (unassigned)  18 Reply-Message  19 Callback-Number  20 Callback-Id
            21 (unassigned)  22 Framed-Route  23 Framed-IPX-Network  24 State
            25 Class  26 Vendor-Specific  27 Session-Timeout  28 Idle-Timeout
            29 Termination-Action  30 Called-Station-Id  31 Calling-Station-Id  32 NAS-Identifier
            33 Proxy-State  34 Login-LAT-Service  35 Login-LAT-Node  36 Login-LAT-Group
            37 Framed-AppleTalk-Link  38 Framed-AppleTalk-Network  39 Framed-AppleTalk-Zone
            60 CHAP-Challenge  61 NAS-Port-Type  62 Port-Limit  63 Login-LAT-Port
        }
    }

    when CLIENT_ACCEPTED {
        binary scan [UDP::payload] cH2SH32cc code ident len auth attr_code1 attr_len1
        log local0. "code = $code"
        log local0. "ident = $ident"
        log local0. "len = $len"
        log local0. "auth = $auth"
        set index 22
        while { $index < $len } {
            set hsize [expr ( $attr_len1 - 2 ) * 2]
            switch $attr_code1 {
                11 -
                1 {
                    binary scan [UDP::payload] @${index}a[expr $attr_len1 - 2]cc attr_value attr_code2 attr_len2
                    log local0. "  $::attr_code2name($attr_code1) = $attr_value"
                }
                9 -
                8 -
                4 {
                    binary scan [UDP::payload] @${index}a4cc rawip attr_code2 attr_len2
                    log local0. "  $::attr_code2name($attr_code1) = [IP::addr $rawip mask 255.255.255.255]"
                }
                13 -
                12 -
                10 -
                7 -
                6 -
                5 {
                    binary scan [UDP::payload] @${index}Icc attr_value attr_code2 attr_len2
                    log local0. "  $::attr_code2name($attr_code1) = $attr_value"
                }
                default {
                    binary scan [UDP::payload] @${index}H${hsize}cc attr_value attr_code2 attr_len2
                    log local0. "  $::attr_code2name($attr_code1) = $attr_value"
                }
            }
            set index [expr $index + $attr_len1]
            set attr_len1 $attr_len2
            set attr_code1 $attr_code2
        }
    }

    when SERVER_DATA {
        binary scan [UDP::payload] cH2SH32cc code ident len auth attr_code1 attr_len1
        log local0. "code = $code"
        log local0. "ident = $ident"
        log local0. "len = $len"
        log local0. "auth = $auth"
        set index 22
        while { $index < $len } {
            set hsize [expr ( $attr_len1 - 2 ) * 2]
            switch $attr_code1 {
                11 -
                1 {
                    binary scan [UDP::payload] @${index}a[expr $attr_len1 - 2]cc attr_value attr_code2 attr_len2
                    log local0. "  $::attr_code2name($attr_code1) = $attr_value"
                }
                9 -
                8 -
                4 {
                    binary scan [UDP::payload] @${index}a4cc rawip attr_code2 attr_len2
                    log local0. "  $::attr_code2name($attr_code1) = [IP::addr $rawip mask 255.255.255.255]"
                }
                13 -
                12 -
                10 -
                7 -
                6 -
                5 {
                    binary scan [UDP::payload] @${index}Icc attr_value attr_code2 attr_len2
                    log local0. "  $::attr_code2name($attr_code1) = $attr_value"
                }
                default {
                    binary scan [UDP::payload] @${index}H${hsize}cc attr_value attr_code2 attr_len2
                    log local0. "  $::attr_code2name($attr_code1) = $attr_value"
                }
            }
            set index [expr $index + $attr_len1]
            set attr_len1 $attr_len2
            set attr_code1 $attr_code2
        }
    }

This iRule could be applied to many areas of interest where a particular value needs to be extracted. For example, the iRule could detect the value of specific attributes or the realm and direct traffic based on that information. A sketch of that idea follows.
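To make that concrete, here is a hedged sketch of what acting on a decoded attribute might look like inside the parsing loop above. It routes by the realm portion of User-Name (attribute type 1); the pool names and the surrounding parsing are assumptions for illustration, not code from the original article:

    # inside the while loop, once attr_value holds User-Name (code 1)
    if { $attr_code1 == 1 } {
        # "alice@corp.example.com" -> realm "corp.example.com"
        set realm [lindex [split $attr_value "@"] 1]
        switch $realm {
            "corp.example.com"  { pool radius_corp_pool }
            "guest.example.com" { pool radius_guest_pool }
            default             { pool radius_default_pool }
        }
    }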
2) This second iRule allows UDP Datagram LB to work with two-factor authentication. Persistence in this iRule is based on the "State" attribute (type = 24). It is another great example of the kinds of things you can do with an iRule, and how deep you can truly dig into a protocol.

    when CLIENT_ACCEPTED {
        binary scan [UDP::payload] ccSH32cc code ident len auth attr_code1 attr_len1
        set index 22
        while { $index < $len } {
            set hsize [expr ( $attr_len1 - 2 ) * 2]
            binary scan [UDP::payload] @${index}H${hsize}cc attr_value attr_code2 attr_len2
            # If it is the State(24) attribute...
            if { $attr_code1 == 24 } {
                persist uie $attr_value 30
                return
            }
            set index [expr $index + $attr_len1]
            set attr_len1 $attr_len2
            set attr_code1 $attr_code2
        }
    }

    when SERVER_DATA {
        binary scan [UDP::payload] ccSH32cc code ident len auth attr_code1 attr_len1
        # If it is an Access-Challenge(11)...
        if { $code == 11 } {
            set index 22
            while { $index < $len } {
                set hsize [expr ( $attr_len1 - 2 ) * 2]
                binary scan [UDP::payload] @${index}H${hsize}cc attr_value attr_code2 attr_len2
                if { $attr_code1 == 24 } {
                    persist add uie $attr_value 30
                    return
                }
                set index [expr $index + $attr_len1]
                set attr_len1 $attr_len2
                set attr_code1 $attr_code2
            }
        }
    }

Conclusion

With iRules, BIG-IP can understand RADIUS packets and make intelligent decisions based on RADIUS protocol information. Additionally, it is possible to manipulate RADIUS packets to meet nearly any application need.

Contributed by: Nat Thirasuttakorn