F5 Security on OWASP Top 10
Everyone is familiar with the OWASP Top 10. Below, you will find some notes on the Top 10, as well as ways to mitigate these potential threats to your environment. You can also download these notes in PDF format. This is the first in a series that will cover the attack vectors and how to apply the protection methods.

A1 – Injection
OWASP definition: Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing unauthorized data.
F5 protection: BIG-IP ASM inspects application traffic and blocks the insertion of malicious scripts. It does so by enforcing injection attack patterns and accurate usage of metacharacters within the URI and parameter names. ASM also looks at parameter values and can enforce pre-defined allowed values, length, and accurate usage of metacharacters.

A2 – Cross-Site Scripting (XSS)
OWASP definition: XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation and escaping. XSS allows attackers to execute scripts in the victim’s browser, which can hijack user sessions, deface web sites, or redirect the user to malicious sites.
F5 protection: BIG-IP ASM protects against Cross-Site Scripting attacks by enforcing XSS attack patterns and accurate usage of metacharacters within the URI and parameter names. ASM also looks at parameter values and can enforce pre-defined allowed values, length, and accurate usage of metacharacters.

A3 – Broken Authentication and Session Management
OWASP definition: Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users’ identities.
F5 protection: BIG-IP ASM enables protection by:
• Using ASM’s unique login page enforcement configuration
• Enforcing login page timeouts
• Enabling application flow enforcement and dynamic parameter protection
• Using SSL on the login page
• Monitoring request attack patterns
• Using ASM-signed cookies so none can be manipulated

A4 – Insecure Direct Object References
OWASP definition: A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data. If a hacker changes his account number to another random number hoping to access a different user’s account, he can manipulate those references to access other objects without authorization. These can include:
• Fraud (price changes, user ID changes)
• Session hijacking
• Enforcing parameter values with high parameters
F5 protection: BIG-IP ASM mitigates this vulnerability by enforcing dynamic parameters (making sure values that were set by the server will not be changed on the client side). The administrator can also whitelist the allowed URLs for the specific application and scan the requests with attack patterns.

A5 – Cross-Site Request Forgery (CSRF)
OWASP definition: A CSRF attack forces a logged-on victim’s browser to send a forged HTTP request, including the victim’s session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim’s browser to generate requests the vulnerable application thinks are legitimate requests from the victim.
F5 protection: BIG-IP ASM mitigates CSRF attacks by adding a random nonce to every URL. This nonce cannot be guessed in advance by an attacker and therefore makes the attack almost impossible. In addition, ASM prevents XSS within an application and enforces the application flow and dynamic parameter values. With flow access, a session timeout can be combined with an F5 iRule™ that performs a referrer header check to minimize CSRF. For instance, flow enforcement mitigates CSRF by limiting the entry points or web pages of attacks, along with keeping session timeouts short. If referring to, say, www.food.com, ASM checks the referrer header in the URL to make sure it’s food.com.

A6 – Security Misconfiguration
OWASP definition: Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. All these settings should be defined, implemented, and maintained, as many are not shipped with secure defaults. This includes keeping all software up to date, including all code libraries used by the application.
F5 protection: BIG-IP ASM can mitigate attacks that are related to misconfiguration by using a broad range of controls, starting with:
• RFC enforcement
• Enforcing various limits on the requests
• Whitelisting the URLs and parameter names and values
• Enforcing a login page
• Being a native full reverse proxy

A7 – Insecure Cryptographic Storage
OWASP definition: Many web applications do not properly protect sensitive data, such as credit cards, SSNs, and authentication credentials, with appropriate encryption or hashing. Attackers may steal or modify such weakly protected data to conduct identity theft, credit card fraud, or other crimes.
F5 protection: While this isn’t directly related to BIG-IP ASM or a WAF, OWASP is mostly concerned with what type of encryption is used and how it is used. These are both outside of the enforcement purview of ASM; however, ASM delivers the following:
• Data Guard – if someone managed to cause an information leakage, Data Guard can block it
• BIG-IP certificate management allows the user to store private keys in a central and secure place

A8 – Failure to Restrict URL Access
OWASP definition: Many web applications check URL access rights before rendering protected links and buttons. However, applications need to perform similar access control checks each time these pages are accessed, or attackers will be able to forge URLs to access these hidden pages anyway.
F5 protection: There are multiple ways that BIG-IP ASM can mitigate this issue. ASM enforces allowed file types and URLs, and accurate parameter values and login pages. BIG-IP ASM’s “flow” technology ensures that site content is only accessed by users that have acquired the proper credentials or visited the prerequisite pages. Users can only visit personal web pages if they have come from, say, a user ID and password sign-on web page.

A9 – Insufficient Transport Layer Protection
OWASP definition: Applications frequently fail to authenticate, encrypt, and protect the confidentiality and integrity of sensitive network traffic. When they do, they sometimes support weak algorithms, use expired or invalid certificates, or do not use them correctly.
F5 protection: BIG-IP ASM significantly simplifies the implementation of SSL and certificate management by centralizing the location and administration of the server certificates in a single location rather than distributing them over farms of servers. Also, by moving SSL handshaking and encryption to BIG-IP ASM, the web servers gain an increased level of performance and efficiency.
In addition, ASM allows you to do the following:
• Require SSL for all sensitive pages; non-SSL requests to these pages should be redirected to the SSL page. Use BIG-IP SSL Acceleration in general for the whole application
• Set the ‘secure’ flag on all sensitive cookies
• Configure your SSL provider to only support strong (e.g., FIPS 140-2 compliant) algorithms (use BIG-IP 6900, 8900)
• Ensure your certificate is valid, not expired, not revoked, and matches all domains used by the site. You can check with EM or scripts from DevCentral
• Backend and other connections should also use SSL or other encryption technologies. Use re-encryption with a Server SSL profile

A10 – Unvalidated Redirects and Forwards
OWASP definition: Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages.
F5 protection: BIG-IP ASM mitigates this issue by enforcing unique attack patterns, enforcing accurate values of parameters, and enforcing dynamic parameters.

More Web Scraping - Bot Detection
In my last article, I discussed the issue of web scraping and why it could be a problem for many individuals and/or companies. In this article, we will dive into some of the technical details regarding bots and how the BIG-IP Application Security Manager (ASM) can detect them and block them from scraping your website. What Is A Bot? A bot is a software application that runs automated tasks and typically performs these tasks much faster than a human possibly could. In the context of web scraping, bots are used to extract data from websites, parse the data, and assemble it into a structured format where it can be presented in a useful form. Bots can perform many other actions as well, like submitting forms, setting up schedules, and connecting to databases. They can also do fun things like add friends to social networking sites like Twitter, Facebook, Google+, and others. A quick Internet search will show that many different bot tools are readily available for download free of charge. We won't go into the specifics of each vendor's bot application, but it's important to understand that they are out there and are very easy to use. Bot Detection So, now that we know what a bot is and what it does, how can we distinguish between malicious bot activity and harmless human activity? Well, the ASM is configured to check for some very specific activities that help it determine if the client source is a bot or a human. By the way, it's important to note that the ASM can accurately detect a human user only if clients have JavaScript enabled and support cookies. There are three different settings in the ASM for bot detection: Off, Alarm, and Alarm and Block. Obviously, if bot detection is set to "Off" then the ASM does not check for bot activity at all. The "Alarm" setting will detect bot activity and record attack data, but it will allow the client to continue accessing the website. The "Alarm and Block" setting will detect bot activity, record the attack data, and block the suspicious requests. These settings are shown in the screenshot below. Once you apply the setting for bot detection, you can then tune the ASM to begin checking for bots that are accessing your website. The bot detection utilizes four different techniques to detect and defend against bot activity. These include Rapid Surfing, Grace Interval, Unsafe Interval, and Safe Interval. Rapid Surfing detects bot activity by counting the client's page consumption speed. A page change is counted from the page load event to its unload event. The ASM configuration allows you to set a maximum number of page changes for a given time period (measured in milliseconds). If a page changes more than the maximum allowable times for the given time interval, the ASM will declare the client as a bot and perform the action that was set for bot detection (Off, Alarm, Alarm and Block). The default setting for Rapid Surfing is 5 page changes per second (or 1000 milliseconds). The Grace Interval setting specifies the maximum number of page requests the system reviews while it tries to detect whether the client is a human or a bot. As soon as the system makes the determination of human or bot it ends the Grace Interval and stops checking for bots. The default setting for the Grace Interval is 100 requests. Once the system determines that the client is valid, the system does not check the subsequent requests as specified in the Safe Interval setting. 
This setting allows for normal client activity to continue since the ASM has determined the client is safe (during the Grace Interval). Once the number of requests sent by the client reaches the value specified in the Safe Interval setting, the system reactivates the Grace Interval and begins the process again. The default setting for the Safe Interval is 2000 requests. This Safe Interval is nice because it lowers the processing overhead needed to constantly check every client request. If the system does not detect a valid client during the Grace Interval, the system issues and continues to issue the "Web Scraping Detected" violation until it reaches the number of requests specified in the Unsafe Interval setting. The Unsafe Interval setting specifies the number of requests that the ASM considers unsafe. Much like in the Safe Interval, after the client sends the number of requests specified in the Unsafe Interval setting, the system reactivates the Grace Interval and begins the process again. The default setting for the Unsafe Interval is 100 requests. The following figure shows the settings for Bot Detection and the values associated with each setting. Interval Timing The following picture shows a timeline of client requests and the intervals associated with each request. In the example, the first 100 client requests will fall into the Grace Interval, and during this interval the ASM will be determining whether or not the client is a bot. Let's say a bot is detected at client request 100. Then, the ASM will immediately invoke the Unsafe Interval and the next 100 requests will be issued a "Web Scraping Detected" violation. When the Unsafe Interval is complete, the ASM reverts back to the Grace Interval. If, during the Grace Interval, the system determines that the client is a human, it does not check the subsequent requests at all (during the Safe Interval). Once the Safe Interval is complete, the system moves back into the Grace Interval and the process continues. Notice that the ASM is able to detect a bot before the Grace Interval is complete (as shown in the latter part of the diagram below). As soon as the system detects a bot, it immediately moves into the Unsafe Interval...even if the Grace Interval has not reached its set threshold. Setting Thresholds As you can see from the timeline above, it's important to establish the correct thresholds for each interval setting. The longer you make the Grace Interval, the longer you give the ASM a chance to detect a bot, but keep in mind that the processing overhead can become expensive. Likewise, the Unsafe Interval setting is a great feature, but if you set it too high, your requesting clients will have to sit through a long period of violation notices before they can access your site again. Finally, the Safe Interval setting allows your users free and open access to your site. If you set this too low, you will force the system to cycle through the Grace Interval unnecessarily, but if you set it too high, a bot might have a chance to sneak past the ASM defense and scrape your site. Remember, the ASM does not check client requests at all during the Safe Interval. Also, remember the ASM does not perform web scraping detection on traffic from search engines that the system recognizes as being legitimate. If your web application has its own search engine, it's recommended that you add it to the system. 
Go to Security > Options > Application Security > Advanced Configuration > Search Engines and add it to the list (the ASM comes preconfigured with Ask, Bing, Google, and Yahoo already loaded).

A Quick Test
I loaded up a sample virtual web server (in this case it was a fictitious online auction site) and then configured the ASM to Alarm and Block bot activity on the site. Then, I fired up the iMacro plugin from Firefox to scrape the site. Using the iMacro plugin, I sent many client requests in a short amount of time. After several requests to the site, I received the response page shown below. You can see that the response page settings in the ASM are shown in the Firefox browser window when web scraping is detected. These settings can be adjusted in the ASM by navigating to Security > Application Security > Blocking > Response Pages. Well, thanks for coming back and reading about ASM bot detection. Be sure to swing by again for the final web scraping article where I will discuss session anomalies. I can't possibly think of anything that could be more fun than that!

Getting Started with Splunk for F5
Pete Silva & Lori MacVittie both had blog posts last week featuring the F5 Application for Splunk, so I thought I’d take the opportunity to get Splunk installed and check it out. In this first part, I’ll cover the installation process. This is one of the easiest installations I've ever written about--it's almost like I'm cheating or something.

Installing Splunk
My platform of choice for this article is Ubuntu, so I downloaded the 4.2.1 Debian package for 64-bit systems from the Splunk site. Installation is a one-step breeze:

dpkg -i /var/tmp/splunk-4.2.1-98165-linux-2.6-amd64.deb

After installation (defaulting to /opt/splunk) start the Splunk server:

/opt/splunk/bin/splunk start

I had to accept the license agreement during the startup process. Afterwards, I was instructed to point my browser to http://<server>:8000. I logged in with the default credentials (admin / changeme) and then was instructed to change my password, which I did (you can skip this step if you prefer). Pretty easy path to a completed installation. The browser should now be in the state shown below in Figure 1.

Installing Splunk for F5
Click on Manager in the upper right-hand corner of the screen, which should take you to the screen shown below in Figure 2. Next, click on Apps as shown below in Figure 3. At this point you have a choice. If you downloaded the Splunk for F5 app from splunkbase, you can click the “install app from file” button. I chose to install from the web, so I clicked the “find more apps online” button. This loaded a listing from splunkbase, with the Splunk for F5 app shown at the bottom of Figure 4 below. After clicking the “install Free” button, I had to enter my splunk.com credentials, then the application installed. Splunk requested a restart, so I restarted and then logged back in. My new session was returned to the online apps screen, so to get to my new F5 app, I clicked “back to search” in the upper left corner, which took me to the Search app home page. Finally, in the upper right corner I selected App and then clicked “Splunk for F5 Security”. This resulted in the screen shown below in Figure 5. Success! Now…what to do with it? How is this useful? Check back for part two next week… For some hints, check out the blogs I mentioned at the top of this article from Pete and Lori: Spelunking for Big Data and Do You Splunk 2.0.

Other Related Articles
• Do you Splunk?
• ASM & Splunk integration
• F5 Networks Partner Spotlight - Splunk
• f5 ltm dashboard in splunk
• Logging HTTP traffic to Splunk
• Client IP Logging with F5 & Splunk

These Are Not The Scrapes You're Looking For - Session Anomalies
In my first article in this series, I discussed web scraping -- what it is, why people do it, and why it could be harmful. My second article outlined the details of bot detection and how the ASM blocks against these pesky little creatures. This last article in the series of web scraping will focus on the final part of the ASM defense against web scraping: session opening anomalies and session transaction anomalies. These two detection modes are new in v11.3, so if you're using v11.2 or earlier, then you should upgrade and take advantage of these great new features! ASM Configuration In case you missed it in the bot detection article, here's a quick screenshot that shows the location and settings of the Session Opening and Session Transactions Anomaly in the ASM. You'll find all the fun when you navigate to Security > Application Security > Anomaly Detection > Web Scraping. There are three different settings in the ASM for Session Anomaly: Off, Alarm, and Alarm and Block. (Note: these settings are configured independently...they don't have to be set at the same value) Obviously, if Session Anomaly is set to "Off" then the ASM does not check for anomalies at all. The "Alarm" setting will detect anomalies and record attack data, but it will allow the client to continue accessing the website. The "Alarm and Block" setting will detect anomalies, record the attack data, and block the suspicious requests. Session Opening Anomaly The first detection and prevention mode we'll discuss is Session Opening Anomaly. But before we get too deep into this, let's review what a session is. From a simple perspective, a session begins when a client visits a website, and it ends when the client leaves the site (or the client exceeds the session timeout value). Most clients will visit a website, surf around some links on the site, find the information they need, and then leave. When clients don't follow a typical browsing pattern, it makes you wonder what they are up to and if they are one of the bad guys trying to scrape your site. That's where Session Opening Anomaly defense comes in! Session Opening Anomaly defense checks for lots of abnormal activities like clients that don't accept cookies or process JavaScript, clients that don't scrape by surfing internal links in the application, and clients that create a one-time session for each resource they consume. These one-time sessions lead scrapers to open a large number of new sessions in order to complete their job quickly. What's Considered A New Session? Since we are discussing session anomalies, I figured we should spend a few sentences on describing how the ASM differentiates between a new or ongoing session for each client request. Each new client is assigned a "TS cookie" and this cookie is used by the ASM to identify future requests from the client with a known, ongoing session. If the ASM receives a client request and the request does not contain a TS cookie, then the ASM knows the request is for a new session. This will prove very important when calculating the values needed to determine whether or not a client is scraping your site. Detection There are two different methods used by the ASM to detect these anomalies. The first method compares a calculated value to a predetermined ceiling value for newly opened sessions. The second method considers the rate of increase of newly opened sessions. We'll dig into all that in just a minute. But first, let's look at the criteria used for detecting these anomalies. 
As you can see from the screenshot above, there are three detection criteria the ASM uses...they are: Sessions opened per second increased by: This specifies that the ASM considers client traffic to be an attack if the number of sessions opened per second increases by a given percentage. The default setting is 500 percent. Sessions opened per second reached: This specifies that the ASM considers client traffic to be an attack if the number of sessions opened per second is greater than or equal to this number. The default value is 400 sessions opened per second. Minimum sessions opened per second threshold for detection: This specifies that the ASM considers traffic to be an attack if the number of sessions opened per second is greater than or equal to the number specified. In addition, at least one of the "Sessions opened per second increased by" or "Sessions opened per second reached" numbers must also be reached. If the number of sessions opened per second is lower than the specified number, the ASM does not consider this traffic to be an attack even if one of the "Sessions opened per second increased by" or "Sessions opened per second" reached numbers was reached. The default value for this setting is 200 sessions opened per second. In addition, the ASM maintains two variables for each client IP address: a one-minute running average of new session opening rate, and a one-hour running average of new session opening rate. Both of these variables are recalculated every second. Now that we have all the basic building blocks. let's look at how the ASM determines if a client is scraping your site. First Method: Predefined Ceiling Value This method uses the user-defined "minimum sessions opened per second threshold for detection" value and compares it to the one-minute running average. If the one-minute average is less than this number, then nothing else happens because the minimum threshold has not been met. But, if the one-minute average is higher than this number, the ASM goes on to compare the one-minute average to the user-defined "sessions opened per second reached" value. If the one-minute average is less than this value, nothing happens. But, if the one-minute average is higher than this value, the ASM will declare the client a web scraper. The following flowchart provides a pictorial representation of this process. Second Method: Rate of Increase The second detection method uses several variables to compare the rate of increase of newly opened sessions against user-defined variables. Like the first method, this method first checks to make sure the minimum sessions opened per second threshold is met before doing anything else. If the minimum threshold has been met, the ASM will perform a few more calculations to determine if the client is a web scraper or not. The "sessions opened per second increased by" value (percentage) is multiplied by the one-hour running average and this value is compared to the one-minute running average. If the one-minute average is greater, then the ASM declares the client a web scraper. If the one-minute average is lower, then nothing happens. The following matrix shows a few examples of this detection method. Keep in mind that the one-minute and one-hour averages are recalculated every second, so these values will be very dynamic. Prevention The ASM provides several policies to prevent session opening anomalies. It begins with the first method that you enable in this list. 
If the system finds this method not effective enough to stop the attack, it uses the next method that you enable in this list. The following screenshots show the different options available for prevention. The "Drop IP Addresses with bad reputation" is tied to Rate Limiting, so it will not appear as an option unless you enable Rate Limiting. Note that IP Address Intelligence must be licensed and enabled. This feature is licensed separately from the other ASM web scraping options. Here's a quick breakdown of what each of these prevention policies do for you: Client Side Integrity Defense: The system determines whether the client is a legal browser or an illegal script by sending a JavaScript challenge to each new session request from the detected IP address, and waiting for a response. The JavaScript challenge will typically involve some sort of computational challenge. Legal browsers will respond with a TS cookie while illegal scripts will not. The default for this feature is disabled. Rate Limiting: The goal of Rate Limiting is to keep the volume of new sessions at a "non-attack" level. The system will drop sessions from suspicious IP addresses after the system determines that the client is an illegal script. The default for this feature is also disabled. Drop IP Addresses with bad reputation: The system drops requests from IP addresses that have a bad reputation according to the system’s IP Address Intelligence database (shown above). The ASM will drop all request from any "bad" IP addresses even if they respond with a TS cookie. IP addresses that do not have a bad reputation also undergo rate limiting. The default for this option is disabled. Keep in mind that this option is available only after Rate Limiting is enabled. In addition, this option is only enforced if at least one of the IP Address Intelligence Categories is set to Alarm mode. Prevention Duration Now that we have detected session opening anomalies and mitigated them using our prevention options, we must figure out how long to apply the prevention measures. This is where the Prevention Duration comes in. This setting specifies the length of time that the system will prevent an attack. The system prevents attacks by rejecting requests from the attacking IP address. There are two settings for Prevention Duration: Unlimited: This specifies that after the system detects and stops an attack, it performs attack prevention until it detects the end of the attack. This is the default setting. Maximum <number of> seconds: This specifies that after the system detects and stops an attack, it performs attack prevention for the amount of time indicated unless the system detects the end of the attack earlier. So, to finish up our Session Opening Anomaly part of this article, I wanted to share a quick scenario. I was recently reading several articles from some of the web scrapers around the block, and I found one guy's solution to work around web scraping defense. Here's what he said: "Since the service conducted rate-limiting based on IP address, my solution was to put the code that hit their service into some client-side JavaScript, and then send the results back to my server from each of the clients. This way, the requests would appear to come from thousands of different places, since each client would presumably have their own unique IP address, and none of them would individually be going over the rate limit." This guy is really smart! And, this would work great against a web scraping defense that only offered a Rate Limiting feature. 
Here's the pop quiz question: If a user were to deploy this same tactic against the ASM, what would you do to catch this guy? I'm thinking you would need to set your minimum threshold at an appropriate level (this will ensure the ASM kicks into gear when all these sessions are opened) and then the "sessions opened per second" or the "sessions opened per second increased by" should take care of the rest for you. As always, it's important to learn what each setting does and then test it on your own environment for a period of time to ensure you have everything tuned correctly. And, don't forget to revisit your settings from time to time...you will probably need to change them as your network environment changes. Session Transactions Anomaly The second detection and prevention mode is Session Transactions Anomaly. This mode specifies how the ASM reacts when it detects a large number of transactions per session as well as a large increase of session transactions. Keep in mind that web scrapers are designed to extract content from your website as quickly and efficiently as possible. So, web scrapers normally perform many more transactions than a typical application client. Even if a web scraper found a way around all the other defenses we've discussed, the Session Transaction Anomaly defense should be able to catch it based on the sheer number of transactions it performs during a given session. The ASM detects this activity by counting the number of transactions per session and comparing that number to a total average of transactions from all sessions. The following screenshot shows the detection and prevention criteria for Session Transactions Anomaly. Detection How does the ASM detect all this bad behavior? Well, since it's trying to find clients that surf your site much more than other clients, it tracks the number of transactions per client session (note: the ASM will drop a session from the table if no transactions are performed for 15 minutes). It also tracks the average number of transactions for all current sessions (note: the ASM calculates the average transaction value every minute). It can use these two figures to compare a specific client session to a reasonable baseline and figure out if the client is performing too many transactions. The ASM can automatically figure out the number of transactions per client, but it needs some user-defined thresholds to conduct the appropriate comparisons. These thresholds are as follows: Session transactions increased by: This specifies that the system considers traffic to be an attack if the number of transactions per session increased by the percentage listed. The default setting is 500 percent. Session transactions reached: This specifies that the system considers traffic to be an attack if the number of transactions per session is equal to or greater than this number. The default value is 400 transactions. Minimum session transactions threshold for detection: This specifies that the system considers traffic to be an attack if the number of transactions per session is equal to or greater than this number, and at least one of the "Sessions transactions increased by" or "Session transactions reached" numbers was reached. If the number of transactions per session is lower than this number, the system does not consider this traffic to be an attack even if one of the "Session transactions increased by" or "Session transaction reached" numbers was reached. The default value is 200 transactions. 
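Pulled together, the session transaction decision reads roughly like the small Tcl sketch below. This is only my illustration of the logic described above (ASM evaluates all of this internally), using the default values from the thresholds just listed:

proc is_web_scraper {session_tx avg_tx} {
    set min_threshold 200       ;# minimum session transactions threshold for detection
    set reached_threshold 400   ;# session transactions reached
    set increased_by 500        ;# session transactions increased by (percent)

    # the minimum threshold must be met before either detection method applies
    if { $session_tx < $min_threshold } { return 0 }
    # method 1: total transactions for the session reached the absolute threshold
    if { $session_tx >= $reached_threshold } { return 1 }
    # method 2: session transactions exceed the all-session average scaled by the percentage
    if { $session_tx > [expr {$avg_tx * $increased_by / 100.0}] } { return 1 }
    return 0
}

With an all-session average of 90 transactions and a session sitting at 250, this returns 0, which matches the worked example that follows.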
The following table shows an example of how the ASM calculates transaction values (averages and individual sessions). We would expect that a given client session would perform about the same number of transactions as the overall average number of transactions per session. But, if one of the sessions is performing a significantly higher number of transactions than the average, then we start to get suspicious. You can see that session 1 and session 3 have transaction values higher than the average, but that only tells part of the story. We need to consider a few more things before we decide if this client is a web scraper or not. By the way, if the ASM knows that a given session is malicious, it does not use that session's transaction numbers when it calculates the average.

Now, let's roll in the threshold values that we discussed above. If the ASM is going to declare a client as a web scraper using the session transaction anomaly defense, the session transactions must first reach the minimum threshold. Using our default minimum threshold value of 200, the only session that exceeded the minimum threshold is session 3 (250 > 200). All other sessions look good so far...keep in mind that these numbers will change as the client performs additional transactions during the session, so more sessions may be considered as their transaction numbers increase. Since we have our eye on session 3 at this point, it's time to look at our two methods of detecting an attack.

The first detection method is a simple comparison of the total session transaction value to our user-defined "session transactions reached" threshold. If the total session transactions are larger than the threshold, the ASM will declare the client a web scraper. Our example would look like this: Is the session 3 transaction value > the threshold value (250 > 400)? No, so the ASM does not declare this client as a web scraper.

The second detection method uses the "transactions increased by" value along with the average transaction value for all sessions. The ASM multiplies the average transaction value by the "transactions increased by" percentage to calculate the value needed for comparison. Our example would look like this: 90 * 500% = 450 transactions. Is the session 3 transaction value > the result (250 > 450)? No, so the ASM does not declare this client as a web scraper.

By the way, only one of these detection methods needs to be met for the ASM to declare the client as a web scraper. You should be able to see how the user-defined thresholds are used in these calculations and comparisons. So, it's important to raise or lower these values as you need for your environment.

Prevention Duration
In order to save you a bunch of time reading about prevention duration, I'll just say that the Session Transactions Anomaly prevention duration works the same as the Session Opening Anomaly prevention duration (Unlimited vs Maximum <number of> seconds). See, that was easy!

Conclusion
Thanks for spending some time reading about session anomalies and web scraping defense. The ASM does a great job of detecting and preventing web scrapers from taking your valuable information. One more thing...for an informative anomaly discussion on the DevCentral Security Forum, check out this conversation. If you have any questions about web scraping or ASM configurations, let me know...you can fill out the comment section below or you can contact the DevCentral team at https://devcentral.f5.com/s/community/contact-us.
With the BIG-IP GTM and 3-DNS products, location-based service has existed in the form of topology-based load balancing. The possibilities grow exponentially in v10.1, as you now have the capabilities in GTM and LTM. In this tech tip, we’ll discuss the iRules command access, the update procedures for keeping your data current, and some use cases for consideration. Introduction The level of detail has grown from continent and country on GTM to continent, country, state (or similar provincial boundaries outside the U.S), carrier, and organization, available on GTM and on LTM and other modules with iRules access. There are native GUI tie-ins in the GTM, but if using in iRules, the command usage is actually quite simple: [whereis [IP::client_addr]] – returns a list with continent, country, state, and city (not yet available) [whereis [IP::client_addr] continent] – returns the continent [whereis [IP::client_addr] country] – returns the continent [whereis [IP::client_addr] <state|abbrev>] – returns the state information as word or as two-letter abbreviation [whereis [IP::client_addr] isp] – returns the carrier [whereis [IP::client_addr] org] - returns the registered organization If looking up several levels, a quick performance check revealed that setting the list to a single variable and using lindex is 3x more efficient than using the whereis keywords. Note that unavailable data will be returned as an empty string for these fields. Updating Your Geolocation Data Previously, to update your location data required a TMOS version upgrade. Now, the updates are released frequently and made available at https://downloads.f5.com. Also, if you want to use the org keyword with the whereis command, you’ll need to download the update or you’ll get an error during execution. The F5 schema supports additional data fields that are documented on the wiki page that will be available in the future. You can experiment with the commands now—it will simply return an empty string or a zero. Sub state granularity requires data not included by default on BIG-IP. Contact F5 sales if you are interested in more detailed granularity. To get started, go out to the downloads site, select the latest zip file, download it, then extract locally and then upload the RPMs to the LTM (or upload the archive and use unzip on the BIG-IP). I extracted locally and used Putty’s pscp executable: C:\Users\rahm\Downloads>pscp geo*.rpm root@172.16.99.128:/var/tmp/ Using keyboard-interactive authentication. Password: geoip-data-ISP-1.0.0-2010 | 2398 kB | 2398.5 kB/s | ETA: 00:00:00 | 100% geoip-data-Org-1.0.0-2010 | 53565 kB | 6695.7 kB/s | ETA: 00:00:00 | 100% geoip-data-Region2-1.0.0- | 12708 kB | 4236.0 kB/s | ETA: 00:00:00 | 100% Now that you have the files local, you can install them: [root@localhost:Active] tmp # ls | grep geo.*.rpm geoip-data-ISP-1.0.0-20100201.28.0.i686.rpm geoip-data-Org-1.0.0-20100201.28.0.i686.rpm geoip-data-Region2-1.0.0-20100201.28.0.i686.rpm [root@localhost:Active] tmp # geoip_update_data geoip-data-ISP-1.0.0-20100201.28.0.i686.rpm [root@localhost:Active] tmp # geoip_update_data geoip-data-Org-1.0.0-20100201.28.0.i686.rpm [root@localhost:Active] tmp # geoip_update_data geoip-data-Region2-1.0.0-20100201.28.0.i686.rpm For further details on the usage of the geolocation database, reference Solution 11176, the TMOS Management Guide, and the Configuration Guide for GTM. All three documents require support logins. Use Cases There are many use cases for utilizing geolocation data, but we’ll look at just a few below. 
First, there are a couple things to note on the use of the geolocation data. Yep, we’re talking EULA. The data is purchased by F5 for use on BIG-IP systems and products for traffic management. The key to understanding EULA compliance is to figure out where the geolocation decision is being made. It is a direct violation of the EULA to use F5’s data to embed geolocation information or codes representing geolocation information into the requests such that another application or server could make the decision on what to do with that data. Customers wishing to use geolocation data on their webservers or in their applications to make decisions in those products should reach out their account team for guidance. F5’s traffic management products have a lot of power and flexibility and can make lots of decisions about traffic using the geolocation data on the BIG-IP. For example, a geolocation lookup can be used to route traffic requests to a different site, different server, different URL, or even substitute a different image, object, etc in the stream. The key is that the BIG-IP is making use of the data to make a decision to take some action. These are all allowed and in fact, intended usage of the geolocation data. Passing the data looked up to another system or displaying it back publicly is a violation of the basic data EULA. To summarize, all usage of the data must remain local to the system with the following two exceptions: Location can be placed in an encrypted cookie for reference ONLY by other BIG-IP devices Logging data can contain location info and collected into a central logging solution for analysis of F5 logs. Note that you can get a waiver of the EULA. In the near future, F5 expects to have expanded data sets available with less restrictions on the use cases. If you’re unsure, run your use case by your sales engineer. Localizing Content Many websites have the language options featured top-right, footer, or sometimes in the navigation itself. You can still make this option available (case in point, I was in Belgium on business a couple weeks ago but much prefer to read my sites in English, thank you) but now with LTM access you can auto-switch the content. when CLIENT_ACCEPTED { switch [whereis [IP::client_addr] country] { US { pool usa } CA { pool canada } MX { pool mexico } default { pool northamerica } } } Regulatory Compliance Perhaps you face restrictions on where your clients can access your application from. This iRule will prevent users not originating from Missouri or Illinois from accessing the application when CLIENT_ACCEPTED { if { !(([whereis [IP::client_addr] abbrev] equals "MO") or ([whereis [IP::client_addr] abbrev] equals "IL")) } { pool rejected } } Redirecting to Closer Geography This is one use case that solves some of the difficulties in global load balancing. With gslb, accuracy of geographic distribution relies on well-designed ldns infrastructure. I know several entities where all ldns is centralized, which leads to problematic distribution. Now, you can analyze at the local level and redirect accordingly. Assuming you built a data group with all the states split into regions (ie, “MO” := “midwestpool”), you could build a simple irule to make sure your clients use the datacenter nearest them. 
when HTTP_REQUEST {
   set region [class match -value [whereis [IP::client_addr] abbrev] equals us_regions]
   if { $region ne "" } {
      switch $region {
         midwest { pool $region }
         east { HTTP::redirect http://my-east.application.com }
         south { HTTP::redirect http://my-south.application.com }
         west { HTTP::redirect http://my-west.application.com }
      }
   } else {
      pool default
   }
}

Another way you might approach that:

when HTTP_REQUEST {
   set region [class match -value [whereis [IP::client_addr] abbrev] equals us_regions]
   if { $region ne "" } {
      if { $region equals "midwest" } {
         pool $region
      } else {
         HTTP::redirect http://my-$region.application.com
      }
   } else {
      pool default
   }
}

*Note—all these iRules are conceptual in nature and are untested.

Conclusion
Accurate, updatable geolocation data, integrated into BIG-IP. We’re really just scratching the surface here. Exciting things to come, stay tuned.

BIG-IP Logging and Reporting Toolkit - part one
Joe Malek, one of the many awesome engineers here at F5, took it upon himself to delve deeply into a very interesting but often unsung part of the BIG-IP advanced configuration world: logging and reporting. It’s my great pleasure to get to share with you his awesome study and the findings therein, along with (eventually) a toolkit to help you get started in the world of custom log manipulation. If you’ve ever questioned or been curious about your options when it comes to information gathering and reporting, this is definitely something you should read. There will be multiple parts, so stay tuned. This one is just the intro.

Logging & Reporting Toolkit - Part 1
Logging & Reporting Toolkit - Part 2
Logging & Reporting Toolkit - Part 3
Logging & Reporting Toolkit - Part 4

Description
F5 products occupy critical positions in application delivery infrastructure. They serve as gateways, proxies, accelerators and traffic flow arbiters. In these roles customer expectations vary for the degree and amount of event information recorded. Several opportunities exist within our current product capabilities for our customers and partners to produce and consume log messages from and via F5 products. Efforts to date include generating W3C style log messages on LTM via iRules, close integration with leading vendors and ASM (requires askf5 login), and creating relationships with leading vendors to best serve our customers. Significant capabilities exist for customers and partners to create their own logging and reporting solutions.

Problems and opportunity
In the many products offered by F5, there exists a variety of logging structures. The common log protocols used to emit messages by F5 products are Syslog (requires askf5 login) and SNMP (requires askf5 login), along with built-in iRules capabilities. Though syslog-ng is commonplace, software components tend to vary in transport, verbosity, message formatting and sometimes syslog facility. This can result in a high degree of data density in our logs, and messages our systems emit can vary from version to version.[i] The combination of these factors results in a challenge that requires a coordinated solution for customers who are compelled by regulation, industry practice, or by business process, to maintain log management infrastructure that consumes messages from F5 devices.[ii] By utilizing the unique product architecture TMOS employs, sharing its knowledge about networks and applications as well as capabilities built into iRules, TMOS can provide much of this information to log management infrastructure in a simple and knowledgeable manner. In effect, we can emit messages about appliance state and offload many message logging tasks from application servers. Based on our connection knowledge we can also improve the utility and value of information obtained from vendor provided log management infrastructure.[iii]

Objectives and success criteria
The success criteria for including an item in the toolkit are:
1. A capability to deliver reports on select items using the leading platforms without requiring core development work on an F5 product.
2. An identified extensibility capability for future customization and report building.
Assumptions and dependencies
• Vendors to include in the toolkit are Splunk, Q1Labs and PresiNET
• ASM logging and reporting is sufficient and does not need further explanation
• Information to be included in sample reports should begin to assist in diagnostic activities, demonstrate ROI by including ROI in an infrastructure and advise on when F5 devices are nearing capacity
• Vendor products must be able to accept event data emitted by F5 products. This means that some vendors might have more comprehensive support than others.
• Products currently supported but not in active development are not eligible for inclusion in the toolkit. Examples are older versions of BIG-IP and FirePass, and all WANJet releases.
• Some vendor products will require code modifications on the vendor’s side to understand the data F5 products send them.

[i] As a piece of customer evidence, Microsoft implemented several logging practices around version 9.1. When they upgraded to version 9.4, their log volume increased several-fold because F5 added log messages and changed existing messages. As a result, the existing message taxonomy needed to be deprecated, and we caused them to need to redesign filters and reports and create a new set of logging practices.

[ii] Regulations such as the Sarbanes-Oxley Act, Gramm-Leach-Bliley Act, Federal Information Security Management Act, PCI DSS, and HIPAA.

[iii] It is common for F5 products to manipulate connections via OneConnect, NATs and SNATs. These operations are unknown to external log collectors, and pose a challenge when assembling a complete view of the network connections between a client and a server via an F5 device for a single application transaction.

What’s Next?
In the next installment we’ll get into the details of the different vendors in question, their offerings, how they work and integrate with BIG-IP, and more.

Logging and Reporting Toolkit Series: Part Two | Part Three

BIG-IP Logging and Reporting Toolkit – part four
So far we’ve covered the initial problem, the players involved and one in-depth analysis of one of the options (Splunk). Next let’s dig into Q1Labs’ QRadar offering. The first thing you’ll need to do, just like last time, is make sure your BIG-IP is set up to pass syslog traffic off the box. Here’s a simple example of how you can get that done in your config file. These are the same as last time, so nothing shockingly new here, though this bit is important.

Logging & Reporting Toolkit - Part 1
Logging & Reporting Toolkit - Part 2
Logging & Reporting Toolkit - Part 3
Logging & Reporting Toolkit - Part 4

BIG-IP v9:

syslog {
   remote server 10.10.200.31
}

BIG-IP v10:

syslog {
   remote server {
      qradar {
         host 10.11.100.31
      }
   }
}

This will send all syslog messages from the BIG-IP to the QRadar system; both BIG-IP system messages and any messages from iRules. If you’re interested in having iRules log to the QRadar system directly you can use the HSL statements or the log statements with a destination host defined. For example, RULE_INIT sets ::QRadarHost to “10.10.200.31”, and then in the iRules event you’re interested in you assemble $log_message and send it to the log with log $::QRadarHost $log_message. A good practice would be to also record it locally on something like local0 in case the message doesn’t make it to the QRadar system.

In my testing I used a single QRadar system running the log collector and event processor. If you’re using a more sophisticated deployment you’ll need to use the Deployment Manager to ensure that the QRadar log collectors are forwarding messages onto the Event Processor you’re going to work with. My QRadar system was already set up to receive syslog messages on port 514, so there wasn’t anything more to do to get messages flowing. The key to working with QRadar is defining regular expressions to extract the message data you’re interested in – once you have that done most things are done using the same process. In this section I’ll walk through all the tasks needed to extract custom data through building a report for the w3c case. Then I’ll show a summary using NEDS and dashboard data.
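Before getting into the regexes, here’s a rough sketch of the direct-logging options just described. This is not code from the toolkit itself – just a minimal, untested illustration. The HSL pool name is made up, and you would tailor the message contents to whatever you want QRadar to parse.

when RULE_INIT {
   # QRadar collector address from the example above
   set ::QRadarHost "10.10.200.31"
}
when HTTP_REQUEST {
   set log_message "client_ip=[IP::client_addr] host=[HTTP::host] request=\"[HTTP::method] [HTTP::uri]\""
   # send to QRadar, and keep a local copy on local0 in case the remote message is lost
   log $::QRadarHost $log_message
   log local0. $log_message
   # alternative on v10.1 and later: high speed logging to a pool of collectors
   # (assumes a pool named qradar_syslog_pool exists)
   #set hsl [HSL::open -proto UDP -pool qradar_syslog_pool]
   #HSL::send $hsl "<134>$log_message"
}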
Here are my regexes for QRadar for w3c, NEDS and the dashboard data script: message source attribute name regex capture group sample message dashboard script Compression Deflate uses deflate\.out\.uses='(\d+)' 1 in dc post dashboard script Compression LZO uses lzo\.out\.uses='(\d+)' 1 in dc post dashboard script Compression Null uses null\.out\.uses='(\d+)' 1 in dc post dashboard script Dashboard-messageType message_type='(.+?)' 1 in dc post dashboard script Dashboard-reportingSystem HostName='(.+?)' 1 in dc post dashboard script Dashboard-routingEnabled routing='(.+?)' 1 in dc post NEDS iRule NEDSv1-Flow-clientside-http "(neds\.f5\.conn\.start\.v1)",(\"[\w\.resp\.v1]+\"\,)+\"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}\-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\" 1 in NEDS Spec NEDS iRule NEDSv1-clientIPaddress "(neds\.f5\.conn\.start\.v1","[\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) 2 in NEDS Spec NEDS iRule NEDSv1-clientPort "(neds\.f5\.conn\.start\.v1","[\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:)(\d{1,5}) 3 in NEDS Spec NEDS iRule NEDSv1-clientCloseBytesIn (neds\.f5\.conn\.end\.v1)","([\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}@\d+\.\d+)",(\d+.\d+),(\d+),(\d+),(\d+) 7 in NEDS Spec NEDS iRule NEDSv1-clientCloseBytesOut (neds\.f5\.conn\.end\.v1)","([\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}@\d+\.\d+)",(\d+.\d+),(\d+),(\d+),(\d+),(\d+) 8 in NEDS Spec NEDS iRule NEDSv1-clientClosePktsIn (neds\.f5\.conn\.end\.v1)","([\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}@\d+\.\d+)",(\d+.\d+),(\d+), 5 in NEDS Spec NEDS iRule NEDSv1-clientClosePktsOut (neds\.f5\.conn\.end\.v1)","([\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}@\d+\.\d+)",(\d+.\d+),(\d+),(\d+) 6 in NEDS Spec NEDS iRule NEDSv1-clientCloseTimestamp (neds\.f5\.conn\.end\.v1)","([\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}@\d+\.\d+)",(\d+.\d+), 4 in NEDS Spec NEDS iRule NEDSv1-clientConnectionIngressVlan (neds[\w\.]+start\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+)\,"(\w+)" 4 in NEDS Spec NEDS iRule NEDSv1-clientConnectionPolicyName (neds[\w\.]+start\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+)\,"(\w+)"\,(\d+),(\d+),(\d+),\"([\w\.]+)\" 8 in NEDS Spec NEDS iRule NEDSv1-clientConnectionStartTimestamp (neds[\w\.]+start\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+) 3 in NEDS Spec NEDS iRule NEDSv1-clientIPProtocol (neds[\w\.]+start\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+)\,"(\w+)",(\d+), 5 in NEDS Spec NEDS iRule NEDSv1-httpRequestHost (neds\.f5\.http\.req\.v1)",("[\w\.\"\:\-\@]+)","([\w\.\:\-\@]+)",(\d+\.\d+,\d+),"([\w\.\_\-]+) 5 in NEDS Spec NEDS iRule NEDSv1-httpRequestServerPort (neds\.f5\.http\.resp\.v1)","([\w\.]+)","([\d\.:]+)-([\d\.]+):(\d{1,5}) 5 in NEDS Spec NEDS iRule NEDSv1-httpRequestTCPReplyNumber (neds\.f5\.http\.req\.v1)",("[\w\.\"\:\-\@]+)","([\w\.\:\-\@]+)",(\d+\.\d+),(\d+) 5 in NEDS Spec NEDS iRule NEDSv1-httpRequestUserAgent 
(neds\.f5\.http\.req\.v1)",("[\w\.\"\,\:\-\@]+)","([\w/\._\%\@]+)",("[\w\@\.]*?"),"([\w/\.\s(;\-\:\)]+) 5 in NEDS Spec NEDS iRule NEDSv1-httpResponseContentLength (neds\.f5\.http\.resp\.v1)","([\w\."\,\:\-\@]+)","([\w\/\;\s\=\-]+)","(\d+) 4 in NEDS Spec NEDS iRule NEDSv1-httpResponseContentType (neds\.f5\.http\.resp\.v1)","([\w\."\,\:\-\@]+)","([\w\/\;\s\=\-]+) 3 in NEDS Spec NEDS iRule NEDSv1-httpResponseLBTarget (neds\.f5\.http\.resp\.v1)","([\w\.\,:\-@/;\s\="]+),"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5})" 3 in NEDS Spec NEDS iRule NEDSv1-reportingSystem (\"neds.+[\w]\.v1\"),\"([\w.]+)\" 2 in NEDS Spec NEDS iRule NEDSv1-responseHTTPContentLength (neds[\w\.]+resp\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+),(\d+),"(\d{3})","([\w\/]+)","(\d+)" 7 in NEDS Spec NEDS iRule NEDSv1-responseHTTPServerResponseCode (neds[\w\.]+resp\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+),(\d+),"(\d{3})" 5 in NEDS Spec w3c iRule W3C Client IP address client_ip=(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) 1 in dc post w3c iRule W3C Client Port client_port=(\d{1,5}) 1 in dc post w3c iRule W3C Client username username=([\w]+) 1 in dc post w3c iRule W3C Content Length content_length=(\d+) 1 in dc post w3c iRule W3C HTTP Request request="(.*)"\ss 1 in dc post w3c iRule W3C HTTP version HTTP/(\d\.\d)" 1 in dc post w3c iRule W3C Host header host=(.+?) 1 in dc post w3c iRule W3C Member server lb_server=(\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}:\d{1,5}) 1 in dc post w3c iRule W3C Server Response Code server_status=(\d{3}) 1 in dc post w3c iRule W3C User Agent user_agent="(.*)" 1 in dc post w3c iRule W3C VIrtual Server name virtual=(.*?)\s 1 in dc post w3c iRule w3c Server Port lb_server=\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:(\d{1,5}) 1 in dc post w3c iRule w3c referer referer=([\w\./\:\-]+) 1 in dc post w3c iRule w3c resp_time resp_time=(\d+) 1 in dc post W3C offload case Now that BIG-IP is setup to send messages to the QRadar system double check to see that you’ve installed the w3c-client-logging iRule on a vip and we’ll see what it looks like when everything is put together. Login to your QRadar management console and navigate to the Events tab. You should see events streaming into QRadar if everything is configured correctly. If you can’t find what you’re looking for in the normalized events view change it to the raw view – I also opted to have my console autorefresh every minute. There last message is the one I’m looking for. If you find a similar message and double click on it we can start to extract data from the message and build up some searches and reports. I’ve not fully customized my QRadar deployment, so I’m ignoring the fact that the Log Source for the iRule messages has been identified by the system’s FastIronDsm. After your screen is showing the Event Viewer click on the Extract Property button in the button bar. This will launch the Custom Event Property Definition tool, which will allow you to categorize event elements and write the regex for extracting the information you’re interested in. Right now, I’m interested in the HTTP server status codes. 
For your custom field extractions, here’s a sample message from the W3C iRule:

Feb 9 14:23:21 tmm tmm[5088]: Rule w3c-client-logging : virtual=www.f5demo.com_http client_ip=65.197.145.92 client_port=37227 lb_server=10.10.200.1:80 host=www.f5demo.com username= request="GET /compression HTTP/1.0" server_status=301 content_length=322 resp_time=1 user_agent="check_http/1.96 (nagios-plugins 1.4.5)]" referer=

The regular expressions for key/value pairings are pretty easy to create. In this window you can see that the regex has located the item in the log message I’m interested in – it’s highlighted in yellow. Save the regex extraction and you’ll be returned to the Event Viewer, where you can look for the new property listed on the page. Now the attribute shows up on the Event list page, down at the bottom. I’ve already entered several regular expressions for the NEDS data, and since I’ve assigned them all to the same Device Support Module (DSM) they’re showing up on this page; this isn’t a NEDS message, so they’re not applicable to this stream.

With the extraction we just assigned to this DSM and log source, we can return to the Event List and build a search. After the search is built it can be used to filter the events list, and we can build a report from it. In the Event Viewer, click on the Search button and select the New Event Search option. I’ve also added extractions to my system for response time and member server. This helps to further illustrate what’s happening in my environment – I can see what hosts are sending which response codes and get a rough idea of what the client and server performance is. Pick the fields you’re interested in including in the search and click on the ‘Filter’ button. QRadar composes the search and saves it to the list of defined searches. I could also add a regex to extract the BIG-IP name from the message and group by that attribute to get an idea of what’s happening across the various BIG-IPs in my environment.

Now the search runs – and I find that in the last 6 hours there have been 147 HTTP 304 response codes recorded by the system. Here’s my search result:

To turn this search into a report or make it available to the dashboard, click on the Save Criteria button in the toolbar. I’ve found that it helps to group searches together, so I’ve created a group called BIG-IP for all my BIG-IP related searches. For this search to appear on your dashboard you’ll also need to click the “Add item…” button on the dashboard and locate your search. To generate the report, click the Reports tab and find the Actions dropdown in the tool bar – select Create. I’m building a manual report for this step. My report uses a single frame and Events/Logs as the information source. And here’s my report: accounting for the date format, there were a lot of 304’s returned to clients on March 5th, and I probably have data missing from March 10th onwards because my BIG-IP was sending log messages somewhere else.

NEDS case

While the w3c offload case used an iRule with key/value tuples, NEDS uses a comma-delimited string to convey information in the message. I spent some time with the specification and wrote several regular expressions to extract the data. The process is identical to what’s outlined in the w3c case, so I’ll save the screen real estate and skip the screen shots of the process.
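To get a feel for how the capture groups in those comma-delimited patterns line up, here’s a quick tclsh sketch of the NEDSv1-responseHTTPServerResponseCode pattern from the table above, run against the sample HTTP_RESPONSE record that appears just below in the NEDS walkthrough (trimmed to the record itself). The intermediate variable names are my own shorthand, not anything NEDS defines:

# Sample NEDS HTTP_RESPONSE record (the payload portion of the syslog message shown below)
set record {"neds.f5.http.resp.v1","bigip9.f5demo.com","65.197.145.92:42709-65.197.145.93:80@1269971099.951082",1269971099.952527,1,"301","text/html; charset=iso-8859-1","322","10.10.200.1:80","65.197.145.92:42709-10.10.200.1:80"}

# NEDSv1-responseHTTPServerResponseCode -- the status code is capture group 5
set pattern {(neds[\w\.]+resp\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+),(\d+),"(\d{3})"}

if {[regexp $pattern $record -> prefix flow ts f4 status]} {
    puts "flow:          $flow"
    puts "response code: $status"
} else {
    puts "no match"
}

Against that record the flow tuple and a response code of 301 come back, which is what capture group 5 is meant to pick up.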
You can find my regular expressions here – I’m fairly new to regular expressions, so I’m sure that there are improvements that can be made to make mine more efficient/maintainable. After defining the custom attributes, here’s what I get when viewing a NEDS message in the Event Viewer. To syslog, a NEDS message looks like:

Mar 30 10:44:59 tmm tmm[5088]: Rule networkEventDataStream <HTTP_RESPONSE>: "neds.f5.http.resp.v1","bigip9.f5demo.com","65.197.145.92:42709-65.197.145.93:80@1269971099.951082",1269971099.952527,1,"301","text/html; charset=iso-8859-1","322","10.10.200.1:80","65.197.145.92:42709-10.10.200.1:80"

Here’s the detailed view using the regular expressions for the HTTP response, client close, HTTP request and client accepted messages: HTTP Response, Client close, HTTP Request, Client Accepted.

Here’s a select result of a search I composed for the connection close data. The “clientCloseBytesOut is not N/A” filter weeds out all the non-client-close NEDS messages. The process to generate a report for NEDS data mirrors the process for the W3C case:
1. Save the search
2. Create a report template
3. Add the search to the report template
4. Save the template
5. Run the report

Dashboard data case

To get the dashboard data streaming into QRadar, I had to modify my base script to send the messages via syslog, instead of just printing the string. In addition to the QRadar and BIG-IP systems, you’ll need another host with the requisite Perl modules installed to relay the data from the BIG-IP to the QRadar. Here’s the script: Dashboard Syslog Perl Script. To use it you’ll need to:
1. On line 66, configure the username the script should use to access the dashboard interface
2. On line 67, configure the password for the username from step 1
3. On line 48, configure the IP address or name for the QRadar event collector
4. Schedule the script to run periodically on your relay host – cron would do nicely.

Once you’ve got the data into QRadar you’ll need to:
1. Find the endpoint-isession-stat data log message and write your regular expressions for the data you’re interested in
2. Find the remote endpoint log message you’re interested in and write your regular expressions for the data you’re interested in
3. Build and save your searches
4.
Build and run your report template Here’s a sample endpoint-isession-stat data log message in syslog: Mar 15 17:05:46 127.0.0.1 10.11.100.73: device_timestamp='Wed Mar 15 00:05:46 2010 GMT' HostName='bigip3900c.demo.f5demo.com' version.version='10.1.0' message_type='endpoint_isession_stat' name='_tunnel_ctrl_10.20.50.103' peer_ref='00:00:00:00:00:00:00:00:00:00:ff:ff:0a:14:32:67' null.in.uses='292670' null.in.errors='0' null.in.bytes_opt='31371493' null.in.bytes_raw='29030133' null.out.uses='292670' null.out.errors='0' null.out.bytes_opt='31371493' null.out.bytes_raw='29030133' lzo.in.uses='1250' lzo.in.errors='0' lzo.in.bytes_opt='139167' lzo.in.bytes_raw='124178' lzo.out.uses='1250' lzo.out.errors='0' lzo.out.bytes_opt='139166' lzo.out.bytes_raw='124177' deflate.in.uses='0' deflate.in.errors='0' deflate.in.bytes_opt='0' deflate.in.bytes_raw='0' deflate.out.uses='0' deflate.out.errors='0' deflate.out.bytes_opt='0' deflate.out.bytes_raw='0' dedup.in.uses='0' dedup.in.errors='0' dedup.in.bytes_opt='0' dedup.in.bytes_raw='0' dedup.out.uses='0' dedup.out.errors='0' dedup.out.bytes_opt='0' dedup.out.bytes_raw='0' dedup_in.hit_bytes='0' dedup_in.hits='0' dedup_in.hit_hist.bucket_1k='0' dedup_in.hit_hist.bucket_2k='0' dedup_in.hit_hist.bucket_4k='0' dedup_in.hit_hist.bucket_8k='0' dedup_in.hit_hist.bucket_16k='0' dedup_in.hit_hist.bucket_32k='0' dedup_in.hit_hist.bucket_64k='0' dedup_in.hit_hist.bucket_128k='0' dedup_in.hit_hist.bucket_256k='0' dedup_in.hit_hist.bucket_512k='0' dedup_in.hit_hist.bucket_1m='0' dedup_in.hit_hist.bucket_large='0' dedup_in.miss_bytes='0' dedup_in.misses='0' dedup_in.miss_hist.bucket_1k='0' dedup_in.miss_hist.bucket_2k='0' dedup_in.miss_hist.bucket_4k='0' dedup_in.miss_hist.bucket_8k='0' dedup_in.miss_hist.bucket_16k='0' dedup_in.miss_hist.bucket_32k='0' dedup_in.miss_hist.bucket_64k='0' dedup_in.miss_hist.bucket_128k='0' dedup_in.miss_hist.bucket_256k='0' dedup_in.miss_hist.bucket_512k='0' dedup_in.miss_hist.bucket_1m='0' dedup_in.miss_hist.bucket_large='0' dedup_out.hit_bytes='0' dedup_out.hits='0' dedup_out.hit_hist.bucket_1k='0' dedup_out.hit_hist.bucket_2k='0' dedup_out.hit_hist.bucket_4k='0' dedup_out.hit_hist.bucket_8k='0' dedup_out.hit_hist.bucket_16k='0' dedup_out.hit_hist.bucket_32k='0' dedup_out.hit_hist.bucket_64k='0' dedup_out.hit_hist.bucket_128k='0' dedup_out.hit_hist.bucket_256k='0' dedup_out.hit_hist.bucket_512k='0' dedup_out.hit_hist.bucket_1m='0' dedup_out.hit_hist.bucket_large='0' dedup_out.miss_bytes='0' dedup_out.misses='0' dedup_out.miss_hist.bucket_1k='0' dedup_out.miss_hist.bucket_2k='0' dedup_out.miss_hist.bucket_4k='0' dedup_out.miss_hist.bucket_8k='0' dedup_out.miss_hist.bucket_16k='0' dedup_out.miss_hist.bucket_32k='0' dedup_out.miss_hist.bucket_64k='0' dedup_out.miss_hist.bucket_128k='0' dedup_out.miss_hist.bucket_256k='0' dedup_out.miss_hist.bucket_512k='0' dedup_out.miss_hist.bucket_1m='0' dedup_out.miss_hist.bucket_large='0' outgoing.conns_idle_cur='0' outgoing.conns_idle_max='0' outgoing.conns_idle_tot='0' outgoing.conns_active_cur='2' outgoing.conns_active_max='3' outgoing.conns_active_tot='3' outgoing.conns_errors='0' outgoing.conns_passthru_tot='0' incoming.conns_idle_cur='0' incoming.conns_idle_max='0' incoming.conns_idle_tot='0' incoming.conns_active_cur='2' incoming.conns_active_max='6' incoming.conns_active_tot='127' incoming.conns_errors='0' incoming.conns_passthru_tot='0' dedup_status_array='cccc ' Here’s a sample remote endpoint log message in syslog: Mar 15 17:05:50 127.0.0.1 10.11.100.73: device_timestamp='Wed Mar 15 
00:05:46 2010 GMT' HostName='bigip3900c.demo.f5demo.com' version.version='10.1.0' message_type='woc_peer' peer_ref='10.20.50.103' name='bigip3900b.demo.f5demo.com' UUID='cd18:4840:f9e0:' mgmt_addr='10.11.100.72' version='10.1.0' dedup_cache='203588' dedup_action='DEDUP_ACTION_NONE' dedup_cache_refresh_flag='false' dedup_cache_refresh_count='0' state='WOC_PEER_STATE_READY' is_enabled='true' origin='MCP_ORIGIN_CONFIGURED' profile_serverssl='' tunnel_encrypt_data='true' tunnel_port='443' behind_nat='false' source_address='WOC_PEER_NAT_SOURCE_ADDRESS_NONE' config_status='none' routing='true' addr_list=''

And lastly if you’re an ASM or APM user there’s an F5 Networks DSM that will recognize your ASM logs; all you need to do is define the hostname or IP address of your QRadar system on your ASM logging profile.

F5 Security on Owasp Top 10: Injections
->Part of the F5/Owasp Top Ten Series

At the top of the Owasp list is Injections. Their definition is “Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing unauthorized data.”

Long story short, it’s allowing unsanitized input into a program field that has the potential for execution (which is darn near everywhere these days). Everyone knows the story of little Bobby. In all honesty, I thought that Bobby’s Mom had a very valid point.

Let’s Inject:

Basic injection attacks are fairly simple to perform. We find an input parameter and try to send it something nefarious. In my labs, I’ve got a nice little auction site up and running. Everyone loves an auction. To bid on items, of course, you need to be authenticated. Well, being the evilHacker, I want to get in without using my credentials. This is where the injection comes in. Using either passive intelligence gathering or just guessing due to the common usages, I decide to try a simple SQL-Injection attack.

The input we are injecting into is the USERNAME field:

Username: ' or 1=1 #

Pre-Injection:
Post Injection:

Huh… a logged in user of ' or 1=1 #? Rut Row Shaggy! So what is going on here? Let’s look at the code at play:

<?php
yadda yadda yadda
$query = "select id from users where nick='$username' and password='".md5($MD5_PREFIX.$password)."' and suspended=0";

It says: find me the user whose username and password match the input (username, plus some MD5 fun on the password) AND whose account is not suspended. How nice. So what evilHacker did was make that simple query say:

$query = "select id from users where nick='' or 1=1 # and password='".md5($MD5_PREFIX.$password)."' and suspended=0";

Now it says: find me the user <no one> or 1=1 (1=1 is a truth statement). In essence you get a select of all the records that exist in the table users. Not a very strong front door, eh?

Let’s fix

Wouldn’t it be nice if they could just fix it at the code level and be done with it? Well, this one they can (fairly simple escaping of characters). But as we all know, in reality most code changes require scrums, waterfalls, validations, testing, and a flood of tears.
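While those code changes wind their way through the process, you can at least put a crude speed bump in front of the application with nothing but LTM. The sketch below is illustrative only – the login URI and the patterns are assumptions for this demo app, it only inspects the URI and query string (a real login form posts its fields in the body, which would need HTTP::collect to inspect), and it is no substitute for the ASM policy we build next:

when HTTP_REQUEST {
    # Hypothetical login URI for the demo auction site
    if { [HTTP::uri] starts_with "/login" } {
        # Decode and lowercase the URI plus query string before inspecting it
        set payload [string tolower [URI::decode [HTTP::uri]]]
        # Extremely naive injection indicators -- for illustration only
        if { ($payload contains "' or 1=1") or ($payload contains "union select") } {
            HTTP::respond 403 content "Request blocked"
            return
        }
    }
}

Pattern matching like this is easy to evade, which is exactly why the signature sets, metacharacter enforcement, and parameter learning in ASM are the better answer.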
In our case, we already have the Virtual server for this website on the LTM/ASM (Virtual Edition 11.1). It’s a few steps to get the ASM in place to defend:

1. Create the HTTP Class for the Virtual Server:
   A. Local Traffic –> Virtual Servers –> Profiles –> Protocol –> HTTP Class –> Create
   B. Give it a name and select Enabled for the “Application Security”
2. Go to Application Security –> Security Policies. Select “Configure Security Policy”
3. For our case, we are going to “Create a Policy Manually”
4. Set the Policy Language. Here we are using UTF-8.
5. Select the Signature Lists you want to use. For our site, we are going to run with the defaults, and for the sake of the demonstration, I turned off staging. This way, we get immediate blocks (yay satisfaction!)
6. Now we configure the Wildcard Tightening. It will put a wildcard in place so that we can have a chance to learn parameters as they are used. Hit next and finished.
7. Now, we apply that HTTP Class to the virtual server. This hooks it into the ASM.
   A. Local Traffic –> Virtual Servers –> Your VS –> Resources –> HTTPClass Profiles
   B. Add your profile and update
8. Now that we are passing traffic through the ASM, I hit the page and log in. In the ASM Learning section, I now see the parameters for username and password.
   A. Application Security –> Policy Building –> Manual –> Traffic Learning. Click on Parameters and we see them.
   B. For now, I hit accept all.
9. Now that the parameters are in the policy and the policy is listening transparently, I want to take them out of staging so I get immediate blocks. I go to Policy –> Parameters –> Parameter List and select each parameter I want to remove from staging. Here, I do password and username.
10. Now, I want to put the policy in blocking mode, to see this bad boy in action.
   A. Click Policy
   B. Set Enforcement Mode to Blocking
   C. Profit? Or hit Save, then apply policy in the top right

Now:

Pre-Injection:
Post Injection:

Why the Block?

Now the coolest part: we, as the admins, can see why the block happened. We go to Application Security –> Reporting –> Requests. Put the Support ID into the filter. It returns the full request, why it was blocked, and the options to learn it as a false positive. Pretty cool huh? This is only the tip of the iceberg for what fun we can have with the ASM.

Part of the F5/Owasp Top Ten Series

DevCentral Architecture
Everyone has surely (don’t call me Shirley!) at least been exposed to THE CLOUD by now. Whether it’s the—I’ll go with interesting—“to the cloud!” commercials or down in the nuts and bolts of hypervisors and programmatic interfaces for automation, the buzz has been around for a while. One of F5’s own, cloud computing expert and blogger extraordinaire (among many other talents) Lori MacVittie, weighs in consistently on the happenings and positioning in the cloud computing space. F5 has some wicked smart talent with expertise in the cloud and dynamic datacenter spaces, and we make products perfectly positioned for both worlds. The release of all our product modules on BIG-IP VE last year presented the opportunity for the DevCentral team to elevate ourselves from evangelists of our great products to customers as well. And with that opportunity, we drove the DevCentral bull onward to our new virtual datacenters at Bluelock.

Proof of Concept

We talked to a couple different vendors during the selection period for a cloud provider. We selected Bluelock for a couple major reasons: first, their influential leadership in the cloud space by way of CTO Pat O’Day; second, their strong partnership with fellow partner VMware, and their use of VMware’s vCloud Director platform. This was a good fit for us, as our production BIG-IP VE products are built for the ESX hypervisors (and others in limited configurations; please reference the supported hypervisors matrix). As part of the selection process, Bluelock set up a temporary virtual datacenter for us to experiment with. Our initial goal was just to get the application working with minimal infrastructure and test the application performance. The biggest concerns going in were related to database performance in a virtual server, as the DevCentral application platform, DotNetNuke, is heavy on queries. The most difficult thing in getting the application up was getting files into the environment. Once we got the files in place and the BIG-IP VE licensed, we were up and running in less than a day. We took captures, analyzed stats, and with literally no tuning, the application was performing within 10% of our production baseline on dedicated server/infrastructure iron. It was an eye-opening success.

Preparations

Proof of concept done, contracts negotiated and done, and a few months down the road, we began preparing for the move. In the proof of concept, LTM was the sole product in use. However, in the existing production environment, we had the LTM, ASM, GTM, and Web Accelerator. In the new production environment at Bluelock, we added APM for secure remote access to the environment, and WOM to secure the traffic between our two virtual datacenters. The list of moving parts:

Change to LTM VE from LTM (w/ ASM module); version upgrade
Change to GTM VE from GTM; version upgrade
Change to Edge Gateway VE from Web Accelerator; version upgrade
Introduce APM (via Edge Gateway VE)
Introduce WOM (via Edge Gateway VE)
Application server upgrade
New monitoring processes
iRules rewrites and updates to take advantage of new features and changes
DNS duties

There were many, many other things we addressed along the way, but these were the big ones. It wasn’t just a physical –> virtual change. The end result was a far different animal than we began with.

Networking

In the vCloud Director environment, there are a few different network types: external networks, organizational networks, and vApp networks, and a few sub-types as well. I’ll leave it to the reader to study all the differences with the platform. We chose to utilize org networks so we could route between vApps with minimal to no additional configuration. We ended up with several networks defined for our organizations, including networks for public access, high availability, config sync, and mirroring, and others for internal routing purposes. The meat of the infrastructure is shown in the diagram below.

Client Flow

One of the design goals for the new environment was to optimize the flow of traffic through the infrastructure, as well as provide for multiple external paths in the event of network or device failures. Web Accelerator could have been licensed with ASM on the BIG-IP LTM VE, but the existing versions do not yet support CMP on VE, so we opted to keep them apart. SSL is terminated on the external vips, and then the ASM policy is applied to a non-routable vip on the LTM, utilizing iRules to implement the vip targeting vip solution. This was done primarily to support the iRules we run to support our application traffic without requiring major modifications that would be necessary to run on a virtual server with a plugin (ASM, APM, WA, etc.) applied.
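For anyone who hasn’t run into the vip targeting vip pattern before, the core of it is just the iRules virtual command on the front-side virtual server. This is a minimal sketch of the idea, not our production iRule, and the virtual server name is made up for the example:

when HTTP_REQUEST {
    # SSL has already been terminated on this external virtual server.
    # Hand the decrypted request to the internal, non-routable virtual
    # server that carries the ASM policy (hypothetical name).
    virtual vs_dc_internal_asm
}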
If you’re wondering about the performance hit of the vip targeting vip solution (you know you are!), in our testing, as long as the front and backside vips are the same type, we found the difference between vip->vip and just a single vip to be negligible, in most cases less than a tenth of a percentage point. YMMV depending on your scenario. You might also be wondering about terminating SSL on BIG-IP VE. There is no magic here. It’s simple math. In our environment, the handshakes per second are not a concern for 2k keys on our implementation, but if you need the dedicated compute power for SSL offload, you can still go with a hybrid deployment, using hardware up front for the heavy lifting and VE for the application intelligence.

Zero-Hands Expansion

One of the cooler things about living in a virtual datacenter is when it comes time for expansion. We originally deployed our secondary datacenter with less gear and availability, but additional production and development projects warranted more protections for failure. Rather than requiring a lengthy equipment procurement process, it took a few emails to quote for new resources, a few emails for approvals, and the time to plan the project and execute. The total hours required to convert our standalone infrastructure in our secondary vDC to an HA environment, and add an application server to boot, came in just under thirty, with no visits required to any datacenter with boxes, dollies, cables, screw drivers, and muscles in tow. Super slick.

Conclusion

F5 BIG-IP VE and the Bluelock Virtual Datacenter is a match made in application paradise. If you have any questions regarding the infrastructure, the deployment process, why the sky is blue, please post below.

Heatmaps, iRules Style: Part2
Last week I talked about generating a heat map 100% via iRules, thanks to the geolocation magic in LTM systems, and the good people over at Google letting us use their charting API. This was an outstanding way to visualize the traffic coming to your application. For those interested in metrics it provides a great way to see this data in a visually pleasing manner. That said, it was pretty basic. All it showed was the United States which, for anyone that has used the internet much, is obviously not representative of the entire web. To be truly useful we’ll need to show the entire world. That’s simple enough. We’ll update the region the map we’re drawing zooms to, so it will look more like this:

Let’s take a look at how this is going to work. Since we were collecting data based on state abbreviations before, we’ll need to first switch that up to use country codes instead. We’ll then change up our Google call so that we’re setting the range covered by the map to the entire world, rather than just the US. While we’re at it, let’s change the name of the subtable we’re using from states to countries, just to keep things more clear. What we end up with is some code that looks very familiar, if you’ve already seen last week’s solution, with a few minor changes:

set chld ""
set chd ""
foreach country [table keys -subtable countries] {
    append chld $country
    append chd "[table lookup -subtable countries $country],"
}
set chd [string trimright $chd ","]
HTTP::respond 200 content "<HTML><center><font size=5>Here is your site's usage by Country:</font><br><br><br><img src='http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm=world&chd=t:$chd&chld=$chld&chco=f5f5f5,edf0d4,6c9642,365e24,13390a' border='0'><br><br><br><br><a href='/resetmap'>Reset All Counters</a></center></HTML>"

So using that in place of the similar logic in last week’s solution you can get a simple world view of the traffic passing through your site.
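To make the moving parts concrete, here is what those two variables and the resulting image URL might look like for a hypothetical traffic mix. The country codes and counts below are made up purely for illustration, but the URL format is exactly what the iRule above assembles:

# Pretend the countries subtable currently holds US=120, JP=45, DE=30
set chld "USJPDE"
set chd  "120,45,30"
# This is the img src the iRule would emit for those values
puts "http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm=world&chd=t:$chd&chld=$chld&chco=f5f5f5,edf0d4,6c9642,365e24,13390a"

Note that chld is just the two-letter codes run together while chd is the comma-separated counts in the same order, which is why the iRule appends to both inside the same loop.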
That’s great and all, but what if you can’t see the detail you’re looking for? What if you want to see the details of Asia’s traffic and be able to decipher the patterns in Japan and the Middle East? What we really need is to build a simple interface to make this more of an application, and less of a single image displayed on a web page. Well, first of all, we already have all the data collected that we’ll need, if you think about it. We’re already tracking the requests per country, so all we need to do is build out options to allow for users to click a link and zoom to a different region of the map. To do this we’ll set up some simple HTML navigation links at the bottom of the page being generated via the iRule, and set up a switch structure to handle each URI the links pass back into the iRule, and use those to format the HTML appropriately so that we get the right Google charts call. That sounds more complicated than it is. Here’s what it looks like:

"/heatmap" {
    set chld ""
    set chd ""
    foreach country [table keys -subtable countries] {
        append chld $country
        append chd "[table lookup -subtable countries $country],"
    }
    set chd [string trimright $chd ","]
    HTTP::respond 200 content "<HTML><center><font size=5>Here is your site's usage by Country:</font><br><br><br><img src='http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm=world&chd=t:$chd&chld=$chld&chco=f5f5f5,edf0d4,6c9642,365e24,13390a' border='0'><br><br>Zoom to region: <a href='/asia'>Asia</a> | <a href='/africa'>Africa</a> | <a href='/europe'>Europe</a> | <a href='/middle_east'>Middle East</a> | <a href='/south_america'>South America</a> | <a href='/usa'>United States</a> | <a href='/heatmap'>World</a><br><br><br><a href='/resetmap'>Reset All Counters</a></center></HTML>"
}
"/asia" {
    set chld ""
    set chd ""
    foreach country [table keys -subtable countries] {
        append chld $country
        append chd "[table lookup -subtable countries $country],"
    }
    set chd [string trimright $chd ","]
    HTTP::respond 200 content "<HTML><center><font size=5>Here is your site's usage by Country:</font><br><br><br><img src='http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm=asia&chd=t:$chd&chld=$chld&chco=f5f5f5,edf0d4,6c9642,365e24,13390a' border='0'><br><br>Zoom to region: <a href='/asia'>Asia</a> | <a href='/africa'>Africa</a> | <a href='/europe'>Europe</a> | <a href='/middle_east'>Middle East</a> | <a href='/south_america'>South America</a> | <a href='/usa'>United States</a> | <a href='/heatmap'>World</a><br><br><br><a href='/resetmap'>Reset All Counters</a></center></HTML>"
}
…

That section can be repeated once for each available region that Google will let us view (Asia | Africa | Europe | Middle East | South America | United States | World). That then gives us something that looks like this:

As you can see, we now have a world view map that shows the heat of each country, and we have individual links that we can click on along the bottom to take us to a zoom of each country/region to get a more specific look at the info there. As an example, let’s take a look at the data from Asia:

So we now have a nice little heatmapping application. It pulls up a world view of app traffic going to your site or app, it allows you to click around to the different regions of the world to get a more detailed view, and it even lets you re-set the data at will. I can hear some among you asking “What about the states, though?”. If I take away the state view of the US and give a world view, then I’m really trading one limitation for another. Ideally we’d be able to see both, right? If I want to be able to give a detailed view on both the countries around the world and the states within the US, then I need to expand my data collection a bit. I need to collect both country codes for incoming requests and state abbreviations, where applicable. This means creating a second sub-table within the iRule, and issuing a second whereis per request coming in.
Something like this should do:

set cloc [whereis [IP::client_addr] country]
set sloc [whereis [IP::client_addr] abbrev]

if {[table incr -subtable countries -mustexist $cloc] eq ""} {
    table set -subtable countries $cloc 1 indefinite indefinite
}
if {[table incr -subtable states -mustexist $sloc] eq ""} {
    table set -subtable states $sloc 1 indefinite indefinite
}

Above we’re using the cloc (country location) and sloc (state location) variables to simultaneously track both country codes and state abbreviations in separate sub tables within the iRule. This way we don’t mix up CA (Canada) and CA (California) or similar crossovers and throw our counts off. When doing this, don’t forget to update the resetmap case as well to empty both sub tables, not just one. This also means that we’ll need to slightly change the logic in the “usa” case as opposed to all of the other cases when doing a lookup. If the user wants to view the USA details, we need to do a subtable lookup on the states sub table; everything else uses the countries sub table. Not too horrible.

Okay, we now have heatmaps for all countries, all available zoom regions and a zoom to state level in the US, complete with some rudimentary HTML to make this feel like an application, not just a static image on a web page. Unfortunately, we also have around 140 lines of code, much of which is being repeated. There’s no sense in repeating that HTML over and over, or those logic statements doing the lookups and whatnot. So it’s time to take out the scalpel and start slicing and dicing, looking for unnecessary code.

I started with the HTML. There’s just no reason to repeat that HTML in every single switch case. So I set that in some static variables in the RULE_INIT section and did away with that altogether in each switch case. Next, the actual iRules logic is identical if I want to view asia or africa or europe or anything other than the US. The only difference is the HTML changing one word to tell the API where to zoom in. Using a little extra “zoom” logic, I was able to cut down most of that repetitive code as well, by having all of the switch cases other than the USA fall through to the world view case, giving us just two chunks of iRules logic to deal with. Not including the extra variables and tidbits, the core of those two chunks of logic are:

foreach country [table keys -subtable countries] {
    append chld $country
    append chd "[table lookup -subtable countries $country],"
}

foreach state [table keys -subtable states] {
    append chld $state
    append chd "[table lookup -subtable states $state],"
}

Don’t stop there, though, there’s more to trim! With some more advanced trickery we can combine these two table lookups into a single piece of logic.
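One bit of that trickery is worth calling out before the full listing. A single string map call turns the requested URI into the Google chart “zoom” value, and a small if/else picks which subtable to read, so one loop serves every region. Pulled out of the rule on its own (the mappings in the comments are just examples of what it produces), it looks like this:

# "/asia"    -> zoom "asia"
# "/usa"     -> zoom "usa"
# "/heatmap" -> zoom "world" (the map's home URI becomes the world view)
set zoom [string map {"/" "" "heatmap" "world"} [HTTP::uri]]

# One collection loop covers every zoom level; only the subtable changes
set chld ""
set chd ""
if {$zoom eq "usa"} { set region "states" } else { set region "countries" }
foreach rg [table keys -subtable $region] {
    append chld $rg
    append chd "[table lookup -subtable $region $rg],"
}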
When all is said and done, here is the final iRule trimmed down to fighting form with a single switch case handling the presentation of all the possible heatmaps generated by Google…pretty cool stuff:

when RULE_INIT {
    set static::resp1 "<HTML><center><font size=5>Here is your site's usage by Country:</font><br><br><br><img src='http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm="
    set static::resp2 "&chco=f5f5f5,edf0d4,6c9642,365e24,13390a' border='0'><br><br>Zoom to region: <a href='/asia'>Asia</a> | <a href='/africa'>Africa</a> | <a href='/europe'>Europe</a> | <a href='/middle_east'>Middle East</a> | <a href='/south_america'>South America</a> | <a href='/usa'>United States</a> | <a href='/heatmap'>World</a><br><br><br><a href='/resetmap'>Reset All Counters</a></center></HTML>"
}
when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::uri]] {
        "/asia" -
        "/africa" -
        "/europe" -
        "/middle_east" -
        "/south_america" -
        "/usa" -
        "/world" -
        "/heatmap*" {
            set chld ""
            set chd ""
            set zoom [string map {"/" "" "heatmap" "world"} [HTTP::uri]]
            ## Configure the table query to be based on the countries subtable or the states subtable ##
            if {$zoom eq "usa"} {
                set region "states"
            } else {
                set region "countries"
            }
            ## Get a list of all states or countries and the associated count of requests from that area ##
            foreach rg [table keys -subtable $region] {
                append chld $rg
                append chd "[table lookup -subtable $region $rg],"
            }
            set chd [string trimright $chd ","]
            ## Send back the pre-formatted response, set in RULE_INIT, combined with the map zoom, list of areas, and request count ##
            HTTP::respond 200 content "${static::resp1}${zoom}&chd=t:${chd}&chld=${chld}${static::resp2}"
        }
        "/resetmap" {
            foreach country [table keys -subtable countries] {
                table delete -subtable countries $country
            }
            foreach state [table keys -subtable states] {
                table delete -subtable states $state
            }
            HTTP::respond 200 Content "<HTML><center><br><br><br><br><br><br>Table Cleared.<br><br><br> <a href='/heatmap'>Return to Map</a></HTML>"
        }
        default {
            ## Look up country & state locations ##
            set cloc [whereis [IP::client_addr] country]
            set sloc [whereis [IP::client_addr] abbrev]
            ## If the IP doesn't resolve to anything, pick a random IP (useful for testing on private networks) ##
            if {($cloc eq "") and ($sloc eq "")} {
                set ip [expr { int(rand()*255) }].[expr { int(rand()*255) }].[expr { int(rand()*255) }].[expr { int(rand()*255) }]
                set cloc [whereis $ip country]
                set sloc [whereis $ip abbrev]
            }
            ## Set Country ##
            if {[table incr -subtable countries -mustexist $cloc] eq ""} {
                table set -subtable countries $cloc 1 indefinite indefinite
            }
            ## Set State ##
            if {[table incr -subtable states -mustexist $sloc] eq ""} {
                table set -subtable states $sloc 1 indefinite indefinite
            }
            HTTP::respond 200 Content "Added"
        }
    }
}

There we have it, an appropriately trimmed down and sleek application to provide worldwide or regional views of heatmaps showing traffic to your application, all generated 100% via iRules. Again, this couldn’t be done without the awesome geolocation abilities of LTM or the Google charting API or, of course, iRules. In the next installment we’ll dig even deeper to see how to turn this application into something even more valuable to those interested in what the users of your site or app are up to.