These Are Not The Scrapes You're Looking For - Session Anomalies
In my first article in this series, I discussed web scraping -- what it is, why people do it, and why it could be harmful. My second article outlined the details of bot detection and how the ASM blocks against these pesky little creatures. This last article in the series on web scraping will focus on the final part of the ASM defense against web scraping: session opening anomalies and session transaction anomalies. These two detection modes are new in v11.3, so if you're using v11.2 or earlier, then you should upgrade and take advantage of these great new features!

ASM Configuration

In case you missed it in the bot detection article, here's a quick screenshot that shows the location and settings of the Session Opening and Session Transactions Anomaly in the ASM. You'll find all the fun when you navigate to Security > Application Security > Anomaly Detection > Web Scraping. There are three different settings in the ASM for Session Anomaly: Off, Alarm, and Alarm and Block. (Note: these settings are configured independently...they don't have to be set at the same value.) Obviously, if Session Anomaly is set to "Off" then the ASM does not check for anomalies at all. The "Alarm" setting will detect anomalies and record attack data, but it will allow the client to continue accessing the website. The "Alarm and Block" setting will detect anomalies, record the attack data, and block the suspicious requests.

Session Opening Anomaly

The first detection and prevention mode we'll discuss is Session Opening Anomaly. But before we get too deep into this, let's review what a session is. From a simple perspective, a session begins when a client visits a website, and it ends when the client leaves the site (or the client exceeds the session timeout value). Most clients will visit a website, surf around some links on the site, find the information they need, and then leave. When clients don't follow a typical browsing pattern, it makes you wonder what they are up to and if they are one of the bad guys trying to scrape your site. That's where Session Opening Anomaly defense comes in! Session Opening Anomaly defense checks for lots of abnormal activities like clients that don't accept cookies or process JavaScript, clients that don't scrape by surfing internal links in the application, and clients that create a one-time session for each resource they consume. These one-time sessions lead scrapers to open a large number of new sessions in order to complete their job quickly.

What's Considered A New Session?

Since we are discussing session anomalies, I figured we should spend a few sentences describing how the ASM differentiates between a new or ongoing session for each client request. Each new client is assigned a "TS cookie" and this cookie is used by the ASM to identify future requests from the client with a known, ongoing session. If the ASM receives a client request and the request does not contain a TS cookie, then the ASM knows the request is for a new session. This will prove very important when calculating the values needed to determine whether or not a client is scraping your site.

Detection

There are two different methods used by the ASM to detect these anomalies. The first method compares a calculated value to a predetermined ceiling value for newly opened sessions. The second method considers the rate of increase of newly opened sessions. We'll dig into all that in just a minute. But first, let's look at the criteria used for detecting these anomalies.
As you can see from the screenshot above, there are three detection criteria the ASM uses:

Sessions opened per second increased by: This specifies that the ASM considers client traffic to be an attack if the number of sessions opened per second increases by a given percentage. The default setting is 500 percent.

Sessions opened per second reached: This specifies that the ASM considers client traffic to be an attack if the number of sessions opened per second is greater than or equal to this number. The default value is 400 sessions opened per second.

Minimum sessions opened per second threshold for detection: This specifies that the ASM considers traffic to be an attack if the number of sessions opened per second is greater than or equal to the number specified and, in addition, at least one of the "Sessions opened per second increased by" or "Sessions opened per second reached" numbers was also reached. If the number of sessions opened per second is lower than the specified number, the ASM does not consider this traffic to be an attack even if one of the "Sessions opened per second increased by" or "Sessions opened per second reached" numbers was reached. The default value for this setting is 200 sessions opened per second.

In addition, the ASM maintains two variables for each client IP address: a one-minute running average of the new session opening rate, and a one-hour running average of the new session opening rate. Both of these variables are recalculated every second. Now that we have all the basic building blocks, let's look at how the ASM determines if a client is scraping your site.

First Method: Predefined Ceiling Value

This method uses the user-defined "minimum sessions opened per second threshold for detection" value and compares it to the one-minute running average. If the one-minute average is less than this number, then nothing else happens because the minimum threshold has not been met. But if the one-minute average is higher than this number, the ASM goes on to compare the one-minute average to the user-defined "sessions opened per second reached" value. If the one-minute average is less than this value, nothing happens. But if the one-minute average is higher than this value, the ASM will declare the client a web scraper. The following flowchart provides a pictorial representation of this process.

Second Method: Rate of Increase

The second detection method uses several variables to compare the rate of increase of newly opened sessions against user-defined variables. Like the first method, this method first checks to make sure the minimum sessions opened per second threshold is met before doing anything else. If the minimum threshold has been met, the ASM will perform a few more calculations to determine if the client is a web scraper or not. The "sessions opened per second increased by" value (percentage) is multiplied by the one-hour running average, and this value is compared to the one-minute running average. If the one-minute average is greater, then the ASM declares the client a web scraper; if the one-minute average is lower, then nothing happens. The following matrix shows a few examples of this detection method. Keep in mind that the one-minute and one-hour averages are recalculated every second, so these values will be very dynamic.
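To make the two methods easier to follow, here's a purely illustrative sketch of the logic in iRules-style Tcl. This is not ASM's internal code and the names are hypothetical; one_min_avg and one_hour_avg stand in for the per-IP running averages described above, and the thresholds use the default values.

proc session_opening_anomaly {one_min_avg one_hour_avg} {
    set min_threshold 200   ;# Minimum sessions opened per second threshold for detection
    set reached       400   ;# Sessions opened per second reached
    set increased_by  500   ;# Sessions opened per second increased by (percent)

    # Neither method fires unless the minimum threshold is met
    if { $one_min_avg < $min_threshold } { return 0 }

    # First method: predefined ceiling value
    if { $one_min_avg >= $reached } { return 1 }

    # Second method: rate of increase relative to the one-hour average
    return [expr { $one_min_avg > $one_hour_avg * $increased_by / 100.0 }]
}

With the defaults, a client averaging 300 new sessions per second clears the minimum threshold but not the ceiling, so it would only be flagged if 300 exceeded 500% of its one-hour average (that is, if the one-hour average were below 60).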
Prevention

The ASM provides several policies to prevent session opening anomalies. It begins with the first method that you enable in this list. If the system finds this method not effective enough to stop the attack, it uses the next method that you enable in this list. The following screenshots show the different options available for prevention. The "Drop IP Addresses with bad reputation" option is tied to Rate Limiting, so it will not appear as an option unless you enable Rate Limiting. Note that IP Address Intelligence must be licensed and enabled; this feature is licensed separately from the other ASM web scraping options. Here's a quick breakdown of what each of these prevention policies does for you:

Client Side Integrity Defense: The system determines whether the client is a legal browser or an illegal script by sending a JavaScript challenge to each new session request from the detected IP address and waiting for a response. The JavaScript challenge will typically involve some sort of computational challenge. Legal browsers will respond with a TS cookie while illegal scripts will not. The default for this feature is disabled.

Rate Limiting: The goal of Rate Limiting is to keep the volume of new sessions at a "non-attack" level. The system will drop sessions from suspicious IP addresses after the system determines that the client is an illegal script. The default for this feature is also disabled.

Drop IP Addresses with bad reputation: The system drops requests from IP addresses that have a bad reputation according to the system's IP Address Intelligence database (shown above). The ASM will drop all requests from any "bad" IP addresses even if they respond with a TS cookie. IP addresses that do not have a bad reputation still undergo rate limiting. The default for this option is disabled. Keep in mind that this option is available only after Rate Limiting is enabled, and it is only enforced if at least one of the IP Address Intelligence categories is set to Alarm mode.

Prevention Duration

Now that we have detected session opening anomalies and mitigated them using our prevention options, we must figure out how long to apply the prevention measures. This is where the Prevention Duration comes in. This setting specifies the length of time that the system will prevent an attack. The system prevents attacks by rejecting requests from the attacking IP address. There are two settings for Prevention Duration:

Unlimited: This specifies that after the system detects and stops an attack, it performs attack prevention until it detects the end of the attack. This is the default setting.

Maximum <number of> seconds: This specifies that after the system detects and stops an attack, it performs attack prevention for the amount of time indicated unless the system detects the end of the attack earlier.

So, to finish up the Session Opening Anomaly part of this article, I wanted to share a quick scenario. I was recently reading several articles from some of the web scrapers around the block, and I found one guy's solution to work around web scraping defense. Here's what he said: "Since the service conducted rate-limiting based on IP address, my solution was to put the code that hit their service into some client-side JavaScript, and then send the results back to my server from each of the clients. This way, the requests would appear to come from thousands of different places, since each client would presumably have their own unique IP address, and none of them would individually be going over the rate limit." This guy is really smart! And this would work great against a web scraping defense that only offered a Rate Limiting feature.
Here's the pop quiz question: If a user were to deploy this same tactic against the ASM, what would you do to catch this guy? I'm thinking you would need to set your minimum threshold at an appropriate level (this will ensure the ASM kicks into gear when all these sessions are opened) and then the "sessions opened per second reached" or the "sessions opened per second increased by" settings should take care of the rest for you. As always, it's important to learn what each setting does and then test it in your own environment for a period of time to ensure you have everything tuned correctly. And don't forget to revisit your settings from time to time...you will probably need to change them as your network environment changes.

Session Transactions Anomaly

The second detection and prevention mode is Session Transactions Anomaly. This mode specifies how the ASM reacts when it detects a large number of transactions per session as well as a large increase of session transactions. Keep in mind that web scrapers are designed to extract content from your website as quickly and efficiently as possible, so web scrapers normally perform many more transactions than a typical application client. Even if a web scraper found a way around all the other defenses we've discussed, the Session Transactions Anomaly defense should be able to catch it based on the sheer number of transactions it performs during a given session. The ASM detects this activity by counting the number of transactions per session and comparing that number to a total average of transactions from all sessions. The following screenshot shows the detection and prevention criteria for Session Transactions Anomaly.

Detection

How does the ASM detect all this bad behavior? Well, since it's trying to find clients that surf your site much more than other clients, it tracks the number of transactions per client session (note: the ASM will drop a session from the table if no transactions are performed for 15 minutes). It also tracks the average number of transactions for all current sessions (note: the ASM calculates the average transaction value every minute). It can use these two figures to compare a specific client session to a reasonable baseline and figure out if the client is performing too many transactions. The ASM can automatically figure out the number of transactions per client, but it needs some user-defined thresholds to conduct the appropriate comparisons. These thresholds are as follows:

Session transactions increased by: This specifies that the system considers traffic to be an attack if the number of transactions per session increased by the percentage listed. The default setting is 500 percent.

Session transactions reached: This specifies that the system considers traffic to be an attack if the number of transactions per session is equal to or greater than this number. The default value is 400 transactions.

Minimum session transactions threshold for detection: This specifies that the system considers traffic to be an attack if the number of transactions per session is equal to or greater than this number, and at least one of the "Session transactions increased by" or "Session transactions reached" numbers was reached. If the number of transactions per session is lower than this number, the system does not consider this traffic to be an attack even if one of the "Session transactions increased by" or "Session transactions reached" numbers was reached. The default value is 200 transactions.
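Before walking through the worked example in the table below, here's the same transaction logic restated as a short Tcl sketch. Again, this is only an illustration of the comparisons described above, not ASM internals, and the names are made up: session_txns is one session's transaction count and avg_txns is the average across all current sessions.

proc session_transactions_anomaly {session_txns avg_txns} {
    set min_threshold 200   ;# Minimum session transactions threshold for detection
    set reached       400   ;# Session transactions reached
    set increased_by  500   ;# Session transactions increased by (percent)

    if { $session_txns < $min_threshold } { return 0 }
    if { $session_txns >= $reached } { return 1 }
    return [expr { $session_txns > $avg_txns * $increased_by / 100.0 }]
}

Plugging in the numbers from the example that follows (a session at 250 transactions against an all-session average of 90), neither check fires, since 250 is below 400 and below 450 (90 * 500%).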
The following table shows an example of how the ASM calculates transaction values (averages and individual sessions). We would expect that a given client session would perform about the same number of transactions as the overall average number of transactions per session. But if one of the sessions is performing a significantly higher number of transactions than the average, then we start to get suspicious. You can see that session 1 and session 3 have transaction values higher than the average, but that only tells part of the story. We need to consider a few more things before we decide if this client is a web scraper or not. By the way, if the ASM knows that a given session is malicious, it does not use that session's transaction numbers when it calculates the average.

Now, let's roll in the threshold values that we discussed above. If the ASM is going to declare a client a web scraper using the session transactions anomaly defense, the session transactions must first reach the minimum threshold. Using our default minimum threshold value of 200, the only session that exceeded the minimum threshold is session 3 (250 > 200). All other sessions look good so far...keep in mind that these numbers will change as the client performs additional transactions during the session, so more sessions may be considered as their transaction numbers increase.

Since we have our eye on session 3 at this point, it's time to look at our two methods of detecting an attack. The first detection method is a simple comparison of the total session transaction value to our user-defined "session transactions reached" threshold. If the total session transactions value is larger than the threshold, the ASM will declare the client a web scraper. Our example would look like this: Is the session 3 transaction value > the threshold value (250 > 400)? No, so the ASM does not declare this client a web scraper.

The second detection method uses the "transactions increased by" value along with the average transaction value for all sessions. The ASM multiplies the average transaction value by the "transactions increased by" percentage to calculate the value needed for comparison. Our example would look like this: 90 * 500% = 450 transactions. Is the session 3 transaction value > the result (250 > 450)? No, so the ASM does not declare this client a web scraper. By the way, only one of these detection methods needs to be met for the ASM to declare the client a web scraper. You should be able to see how the user-defined thresholds are used in these calculations and comparisons, so it's important to raise or lower these values as needed for your environment.

Prevention Duration

In order to save you a bunch of time reading about prevention duration, I'll just say that the Session Transactions Anomaly prevention duration works the same as the Session Opening Anomaly prevention duration (Unlimited vs Maximum <number of> seconds). See, that was easy!

Conclusion

Thanks for spending some time reading about session anomalies and web scraping defense. The ASM does a great job of detecting and preventing web scrapers from taking your valuable information. One more thing...for an informative anomaly discussion on the DevCentral Security Forum, check out this conversation. If you have any questions about web scraping or ASM configurations, let me know...you can fill out the comment section below or you can contact the DevCentral team at https://devcentral.f5.com/s/community/contact-us.

F5 Security on Owasp Top 10
Everyone is familiar with the Owasp Top 10. Below, you will find some notes on the Top 10, as well as ways to mitigate these potential threats to your environment. You can also download the PDF format by clicking the blankie ––> This is the first in a series that will cover the attack vectors and how to apply the protection methods. OWASP Attack OWASP DEFINITION F5 PROTECTION A1 Injection Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing unauthorized data. BIG-IP ASM inspects application traffic and blocks the insertion of malicious scripts. It does so by enforcing injection attack patterns, enforcing an accurate usage of metacharacters within the URI and parameter names. ASM also looks at parameter values and can enforce pre-defined allowed values, length and accurate usage of metacharacters. A2 Cross-Site Scripting (XSS) XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation and escaping. XSS allows attackers to execute scripts in the victim’s browser which can hijack user sessions, deface web sites, or redirect the user to malicious sites. BIG-IP ASM protects against Cross-Site Scripting attacks by enforcing XSS attack patterns, enforcing an accurate usage of metacharacters within the URI and parameter names. ASM also looks at parameter values and can enforce pre-defined allowed values, length and accurate usage of metacharacters. A3 Broken Authentication and Session Management Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, session tokens, or exploit other implementation flaws to assume other users’ identities. BIG-IP ASM enables protection by: • Using ASM’s unique login page enforcement configuration • Enforcing login page timeouts • Enabling application flow enforcement and dynamic parameter protection • Using SSL on the login page • Monitoring request attack patterns • Using ASM signed cookies so none are being manipulated A4 Insecure Direct Object References A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory,or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data. If a hacker changes his account number to another random number hoping to access a different user’s account they can manipulate those references to access other objects without authorization. These can include: • Fraud (price changes, user ID changes) • Session highjacking • Enforcing parameter values with high parameters BIG-IP ASM mitigates this vulnerability by enforcing dynamic parameters (making sure values that were set by the server will not be changed on the client side). Also the admin. can whitelist the allowed URLs for the specific application and scan the requests with attack patterns. A5 Cross-Site Request Forgery (CSRF) A CSRF attack forces a logged-on victim’s browser to send a forged HTTP request, including the victim’s session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim’s browser to generate requests the vulnerable application thinks are legitimate requests from the victim. 
BIG-IP ASM mitigates CSRF attacks by adding a random nonce to every URL. This nonce cannot be guessed in advance by an attacker and therefore makes the attack almost impossible. In addition, ASM is preventing XSS within an application and enforcing the application flow and dynamic parameter values. With flow access, a session timeout can be combined with an F5 iRule™ designed to note referrer header check to minimize CSRF. For instance, flow enforcement mitigates CSRF by limiting the entry points or web pages of attacks along with session timeouts being short. If referring to say www.food.com, ASM checks the referrer header in the URL to make sure it’s food.com. A6 Security Misconfiguration Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. All these settings should be defined, implemented, and maintained as many are not shipped with secure defaults. This includes keeping all software up to date, including all code libraries used by the application. BIG-IP ASM can mitigate attacks that are related to misconfiguration by using a broad range of controls starting with: • RFC enforcement • Enforcing various limits on the requests • Whitelisting the URLs and parameters names and values • Enforcing a login page • Being a native full reverse proxy A7 Insecure Cryptographic Storage Many web applications do not properly protect sensitive data, such as credit cards, SSNs, and authentication credentials, with appropriate encryption or hashing. Attackers may steal or modify such weakly protected data to conduct identity theft, credit card fraud, or other crimes. While this isn’t directly related to BIG-IP ASM or WAF, OWASP is mostly concerned with what type of encryption is used and how it is used. These are both outside of the enforcement purview of ASM; however, ASM delivers the following: • Data Guard - if someone managed to cause an information leakage, Data Guard can block it • BIG-IP certificate management allows the user to store private keys in a central and secure place. A8 Failure to Restrict URL Access Many web applications check URL access rights before rendering protected links and buttons. However, applications need to perform similar access control checks each time these pages are accessed, or attackers will be able to forge URLs to access these hidden pages anyway. There are multiple ways that BIG-IP ASM can mitigate this issue. , ASM enforces allowed file types and URLs, and accurate parameter values and login pages. BIG-IP ASM’s “flow” technology ensures that site content is only accessed by users that have acquired the proper credentials or visited the prerequisite pages. Users can only visit personal web pages if they have come from the say a user ID and password sign on web page. A9 Insufficient Transport Layer Protection Applications frequently fail to authenticate, encrypt, and protect the confidentiality and integrity of sensitive network traffic. When they do, they sometimes support weak algorithms, use expired or invalid certificates, or do not use them correctly. BIG-IP ASM significantly simplifies the implementation of SSL and certificate management by centralizing the location and administration of the server certificates in a single location rather than distributed over farms of servers. Also, by moving SSL handshaking and encryption to BIG-IP ASM, the Web servers gain an increased level of performance and efficiency. 
In addition, ASM allows you to do the following:
• Require SSL for all sensitive pages; non-SSL requests to these pages are redirected to the SSL page. Use BIG-IP SSL Acceleration in general for the whole application.
• Set the 'secure' flag on all sensitive cookies.
• Configure your SSL provider to only support strong (e.g., FIPS 140-2 compliant) algorithms (use BIG-IP 6900, 8900).
• Ensure your certificate is valid, not expired, not revoked, and matches all domains used by the site. You can check with EM or scripts from DevCentral.
• Backend and other connections should also use SSL or other encryption technologies. Use re-encryption with a Server SSL profile.

A10 Unvalidated Redirects and Forwards
Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages.
BIG-IP ASM mitigates this issue by enforcing unique attack patterns, enforcing accurate values of parameters and enforcing dynamic parameters.
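Two of the A9 items above (redirecting non-SSL requests to the SSL page and setting the 'secure' flag on cookies) are also easy to illustrate at the LTM layer. The following is a generic, hedged iRule sketch rather than anything from the original table; the two events would normally live in separate iRules attached to the HTTP and HTTPS virtual servers, and you should test it before relying on it.

# Attached to the port 80 virtual server: push everything to HTTPS
when HTTP_REQUEST {
    HTTP::redirect "https://[HTTP::host][HTTP::uri]"
}

# Attached to the HTTPS virtual server: mark every cookie the servers set as secure
when HTTP_RESPONSE {
    foreach cookie_name [HTTP::cookie names] {
        HTTP::cookie secure $cookie_name enable
    }
}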
BIG-IP Logging and Reporting Toolkit - part one

Joe Malek, one of the many awesome engineers here at F5, took it upon himself to delve deeply into a very interesting but often unsung part of the BIG-IP advanced configuration world: logging and reporting. It's my great pleasure to get to share with you his awesome study and the findings therein, along with (eventually) a toolkit to help you get started in the world of custom log manipulation. If you've ever questioned or been curious about your options when it comes to information gathering and reporting, this is definitely something you should read. There will be multiple parts, so stay tuned. This one is just the intro. Logging & Reporting Toolkit - Part 1 Logging & Reporting Toolkit - Part 2 Logging & Reporting Toolkit - Part 3 Logging & Reporting Toolkit - Part 4

Description

F5 products occupy critical positions in application delivery infrastructure. They serve as gateways, proxies, accelerators and traffic flow arbiters. In these roles customer expectations vary for the degree and amount of event information recorded. Several opportunities exist within our current product capabilities for our customers and partners to produce and consume log messages from and via F5 products. Efforts to date include generating W3C style log messages on LTM via iRules, close integration with leading vendors and ASM (requires askf5 login), and creating relationships with leading vendors to best serve our customers. Significant capabilities exist for customers and partners to create their own logging and reporting solutions.

Problems and opportunity

In the many products offered by F5, there exists a variety of logging structures. The common log protocols used to emit messages by F5 products are Syslog (requires askf5 login) and SNMP (requires askf5 login), along with built-in iRules capabilities. Though syslog-ng is commonplace, software components tend to vary in transport, verbosity, message formatting and sometimes syslog facility. This can result in a high degree of data density in our logs, and messages our systems emit can vary from version to version.[i] The combination of these factors results in a challenge that requires a coordinated solution for customers who are compelled by regulation, industry practice, or by business process, to maintain log management infrastructure that consumes messages from F5 devices.[ii] By utilizing the unique product architecture TMOS employs by sharing its knowledge about networks and applications as well as capabilities built into iRules, TMOS can provide much of this information to log management infrastructure in a simple and knowledgeable manner. In effect, we can emit messages about appliance state and offload many message logging tasks from application servers. Based on our connection knowledge we can also improve the utility and value of information obtained from vendor provided log management infrastructure.[iii]

Objectives and success criteria

The success criteria for including an item in the toolkit are:
1. A capability to deliver reports on select items using the leading platforms without requiring core development work on an F5 product.
2. An identified extensibility capability for future customization and report building.
Assumptions and dependencies
• Vendors to include in the toolkit are Splunk, Q1Labs and PresiNET.
• ASM logging and reporting is sufficient and does not need further explanation.
• Information to be included in sample reports should begin to assist in diagnostic activities, demonstrate ROI by including ROI in an infrastructure and advise on when F5 devices are nearing capacity.
• Vendor products must be able to accept event data emitted by F5 products. This means that some vendors might have more comprehensive support than others.
• Products currently supported but not in active development are not eligible for inclusion in the toolkit. Examples are older versions of BIG-IP and FirePass, and all WANJet releases.
• Some vendor products will require code modifications on the vendor's side to understand the data F5 products send them.

[i] As a piece of customer evidence, Microsoft implemented several logging practices around version 9.1. When they upgraded to version 9.4 their log volume increased several-fold because F5 added log messages and changed existing messages. As a result the existing message taxonomy needed to be deprecated, which forced them to redesign filters and reports and to create a new set of logging practices.

[ii] Regulations such as the Sarbanes-Oxley Act, Gramm-Leach-Bliley Act, Federal Information Security Management Act, PCI DSS, and HIPAA.

[iii] It is common for F5 products to manipulate connections via OneConnect, NATs and SNATs. These operations are unknown to external log collectors, and pose a challenge when assembling a complete view of the network connections between a client and a server via an F5 device for a single application transaction.

What's Next?

In the next installment we'll get into the details of the different vendors in question, their offerings, how they work and integrate with BIG-IP, and more. Logging and Reporting Toolkit Series: Part Two | Part Three

F5 Security on Owasp Top 10: Injections
->Part of the F5/Owasp Top Ten Series

At the top of the Owasp list is Injections. Their definition is "Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing unauthorized data." Long story short, it is allowing unsanitized input into a program field that has the potential for execution (which is darn near everywhere these days). Everyone knows the story of little Bobby: In all honesty, I thought that Bobby's Mom had a very valid point.

Let's Inject:

Basic injection attacks are fairly simple to perform. We find an input parameter and try to send it something nefarious. In my labs, I've got a nice little auction site up and running. Everyone loves an auction. To bid on items, of course, you need to be authenticated. Well, being the evilHacker, I want to get in without using my credentials. This is where the injection comes in. Using either passive intelligence gathering or just guessing due to the common usages, I decide to try a simple SQL injection attack. The input we are injecting into is the USERNAME field:

Username: ' or 1=1 #

Pre-Injection: Post Injection: Huh... a logged in user of ' or 1=1 #? Rut Row Shaggy!

So what is going on here? Let's look at the code at play:

<?php yadda yadda yadda
$query = "select id from users where nick='$username' and password='".md5($MD5_PREFIX.$password)."' and suspended=0";

It says: Find me the user whose username and password match the input (username, plus some MD5 fun on the password) AND whose account is not suspended. How nice. So what evilHacker did was make that simple query say:

$query = "select id from users where nick='' or 1=1 #' and password='".md5($MD5_PREFIX.$password)."' and suspended=0";

Now it says: Find me the user <no one> or 1=1 (1=1 is a truth statement). In essence, you select all records that exist in the table users. Not a very strong front door, eh?

Let's fix

Wouldn't it be nice if they could just fix it at the code level and be done with it? Well, this one they can (fairly simple escaping of characters). But as we all know, in reality most code changes require scrums, waterfalls, validations, testing, and a flood of tears. In our case, we already have the Virtual Server for this website on the LTM/ASM (Virtual Edition 11.1). It's a few steps to get the ASM in place to defend:

1. Create the HTTP Class for the Virtual Server:
A. Local Traffic -> Virtual Servers -> Profiles -> Protocol -> HTTP Class -> Create
B. Give it a name and select Enabled for "Application Security"
2. Go to Application Security -> Security Policies. Select "Configure Security Policy".
3. For our case, we are going to "Create a Policy Manually".
4. Set the Policy Language. Here we are using UTF-8.
5. Select the Signature Lists you want to use. For our site, we are going to run with the defaults and, for the sake of the demonstration, I turned off staging. This way, we get immediate blocks (yay satisfaction!)
6. Now we configure the Wildcard Tightening. It will put a wildcard in place so that we can have a chance to learn parameters as they are used. Hit Next and Finished.
7. Now, we apply that HTTP Class to the virtual server. This hooks it into the ASM.
A. Local Traffic -> Virtual Servers -> Your VS -> Resources -> HTTP Class Profiles
B. Add your profile and update
8. Now we are passing traffic through the ASM, I hit the page and log in.
In the ASM Learning section, I now see the parameters for username and password.
A. Application Security -> Policy Building -> Manual -> Traffic Learning. Click on Parameters and we see them.
B. For now, I hit Accept All.
9. Now that the parameters are in the policy and the policy is listening transparently, I want to take them out of staging so I get immediate blocks. I go to Policy -> Parameters -> Parameter List and select each parameter I want to remove from staging. Here, I do password and username.
10. Now, I want to put the policy in blocking mode, to see this bad boy in action.
A. Click Policy
B. Set Enforcement Mode to Blocking
C. Profit? Or hit Save, then Apply Policy in the top right

Now: Pre-Injection: Post Injection:

Why the Block?

Now the coolest part: we, as the admins, can see why the block happened. We go to Application Security -> Reporting -> Requests. Put the Support ID into the filter. It returns the full request, why it was blocked, and the options to learn it as a false positive. Pretty cool huh? This is only the tip of the iceberg for what fun we can have with the ASM. Part of the F5/Owasp Top Ten Series

Monitors: Web Service - Correct Configuration
I have a SOAP Web Service which returns "OK" when all of the components in one half of the infrastructure stack are healthy and functioning correctly. I can test the Web Service from SoapUI (and other sources) and it works as required. When I create an LTM Web Service Monitor and attach it to a member of a Pool List, the monitor consistently reports that the Web Service is not returning the success message ("OK"). I have searched the documentation and DevCentral for a basic example to ensure my configuration of the monitor is correct, but I cannot find one. If anyone can point to an example that can assist me in determining whether or not I have configured the Monitor correctly, it would be appreciated.
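For anyone comparing notes, a plain LTM HTTP monitor for this kind of check usually ends up looking something like the sketch below (bigip.conf-style syntax; the path and Host value are placeholders and the exact stanza format varies by version). Two common gotchas: the send string must be a complete raw request terminated with \r\n\r\n, and the recv string is matched against the raw response, so "OK" has to appear literally in an uncompressed, un-chunked payload. If the service only answers to a POSTed SOAP envelope, the entire envelope (with a matching Content-Length) has to be embedded in the send string, or the check moved to an external monitor script.

monitor soap_ok_monitor {
    defaults from http
    send "GET /MyService/HealthCheck HTTP/1.1\r\nHost: myservice.example.com\r\nConnection: close\r\n\r\n"
    recv "OK"
}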
BIG-IP Logging and Reporting Toolkit – part four

So far we've covered the initial problem, the players involved, and one in-depth analysis of one of the options (Splunk). Next let's dig into Q1 Labs' QRadar offering. The first thing you'll need to do, just like last time, is to make sure your BIG-IP passes syslog traffic off the box. Here's a simple example of how you can get that done in your config file. These are the same as last time, so nothing shockingly new here, though this bit is important. Logging & Reporting Toolkit - Part 1 Logging & Reporting Toolkit - Part 2 Logging & Reporting Toolkit - Part 3 Logging & Reporting Toolkit - Part 4

Bigip v9

syslog {
    remote server 10.10.200.31
}

Bigip v10

syslog {
    remote server {
        qradar {
            host 10.11.100.31
        }
    }
}

This will send all syslog messages from the BIG-IP to the QRadar system; both BIG-IP system messages and any messages from iRules. If you're interested in having iRules log to the QRadar system directly, you can use the HSL statements or the log statements with a destination host defined. Ex) RULE_INIT has set ::QRadarHost "10.10.200.31" and then in the iRules event you're interested in you assemble $log_message and then send it to the log with log $::QRadarHost $log_message. A good practice would be to also record it locally on something like local0 in case the message doesn't make it to the QRadar system.

In my testing I used a single QRadar system running the log collector and event processor. If you're using a more sophisticated deployment you'll need to use the Deployment Manager to ensure that the QRadar log collectors are forwarding messages on to the Event Processor you're going to work with. My QRadar system was already set up to receive syslog messages on port 514, so there wasn't anything more to do to get messages flowing. The key to working with QRadar is defining regular expressions to extract the message data you're interested in – once you have that done most things are done using the same process. In this section I'll walk through all the tasks needed to extract custom data through building a report for the w3c case. Then I'll show a summary using NEDS and dashboard data.
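To make the iRule option above concrete, here's a minimal sketch following the same pattern the article describes: a destination host set in RULE_INIT, a message assembled in the event you care about, then a log call aimed at the collector, with a local copy kept on local0 in case the remote message is lost. The event and message fields are just examples.

when RULE_INIT {
    # QRadar event collector from the examples above
    set ::QRadarHost "10.10.200.31"
}

when HTTP_REQUEST {
    # Assemble whatever fields you care about into a single message
    set log_message "client_ip=[IP::client_addr] host=[HTTP::host] uri=[HTTP::uri]"

    # Send to the remote collector, and keep a local copy as a fallback
    log $::QRadarHost $log_message
    log local0. $log_message
}

On v10.1 you could also use the HSL commands with a pool of collectors, or swap the global variable for the static:: namespace to stay CMP-friendly; either variation follows the same idea.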
Here are my regexes for QRadar for w3c, NEDS and the dashboard data script: message source attribute name regex capture group sample message dashboard script Compression Deflate uses deflate\.out\.uses='(\d+)' 1 in dc post dashboard script Compression LZO uses lzo\.out\.uses='(\d+)' 1 in dc post dashboard script Compression Null uses null\.out\.uses='(\d+)' 1 in dc post dashboard script Dashboard-messageType message_type='(.+?)' 1 in dc post dashboard script Dashboard-reportingSystem HostName='(.+?)' 1 in dc post dashboard script Dashboard-routingEnabled routing='(.+?)' 1 in dc post NEDS iRule NEDSv1-Flow-clientside-http "(neds\.f5\.conn\.start\.v1)",(\"[\w\.resp\.v1]+\"\,)+\"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}\-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\" 1 in NEDS Spec NEDS iRule NEDSv1-clientIPaddress "(neds\.f5\.conn\.start\.v1","[\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) 2 in NEDS Spec NEDS iRule NEDSv1-clientPort "(neds\.f5\.conn\.start\.v1","[\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:)(\d{1,5}) 3 in NEDS Spec NEDS iRule NEDSv1-clientCloseBytesIn (neds\.f5\.conn\.end\.v1)","([\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}@\d+\.\d+)",(\d+.\d+),(\d+),(\d+),(\d+) 7 in NEDS Spec NEDS iRule NEDSv1-clientCloseBytesOut (neds\.f5\.conn\.end\.v1)","([\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}@\d+\.\d+)",(\d+.\d+),(\d+),(\d+),(\d+),(\d+) 8 in NEDS Spec NEDS iRule NEDSv1-clientClosePktsIn (neds\.f5\.conn\.end\.v1)","([\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}@\d+\.\d+)",(\d+.\d+),(\d+), 5 in NEDS Spec NEDS iRule NEDSv1-clientClosePktsOut (neds\.f5\.conn\.end\.v1)","([\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}@\d+\.\d+)",(\d+.\d+),(\d+),(\d+) 6 in NEDS Spec NEDS iRule NEDSv1-clientCloseTimestamp (neds\.f5\.conn\.end\.v1)","([\w\.]+)","(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}@\d+\.\d+)",(\d+.\d+), 4 in NEDS Spec NEDS iRule NEDSv1-clientConnectionIngressVlan (neds[\w\.]+start\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+)\,"(\w+)" 4 in NEDS Spec NEDS iRule NEDSv1-clientConnectionPolicyName (neds[\w\.]+start\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+)\,"(\w+)"\,(\d+),(\d+),(\d+),\"([\w\.]+)\" 8 in NEDS Spec NEDS iRule NEDSv1-clientConnectionStartTimestamp (neds[\w\.]+start\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+) 3 in NEDS Spec NEDS iRule NEDSv1-clientIPProtocol (neds[\w\.]+start\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+)\,"(\w+)",(\d+), 5 in NEDS Spec NEDS iRule NEDSv1-httpRequestHost (neds\.f5\.http\.req\.v1)",("[\w\.\"\:\-\@]+)","([\w\.\:\-\@]+)",(\d+\.\d+,\d+),"([\w\.\_\-]+) 5 in NEDS Spec NEDS iRule NEDSv1-httpRequestServerPort (neds\.f5\.http\.resp\.v1)","([\w\.]+)","([\d\.:]+)-([\d\.]+):(\d{1,5}) 5 in NEDS Spec NEDS iRule NEDSv1-httpRequestTCPReplyNumber (neds\.f5\.http\.req\.v1)",("[\w\.\"\:\-\@]+)","([\w\.\:\-\@]+)",(\d+\.\d+),(\d+) 5 in NEDS Spec NEDS iRule NEDSv1-httpRequestUserAgent 
(neds\.f5\.http\.req\.v1)",("[\w\.\"\,\:\-\@]+)","([\w/\._\%\@]+)",("[\w\@\.]*?"),"([\w/\.\s(;\-\:\)]+) 5 in NEDS Spec NEDS iRule NEDSv1-httpResponseContentLength (neds\.f5\.http\.resp\.v1)","([\w\."\,\:\-\@]+)","([\w\/\;\s\=\-]+)","(\d+) 4 in NEDS Spec NEDS iRule NEDSv1-httpResponseContentType (neds\.f5\.http\.resp\.v1)","([\w\."\,\:\-\@]+)","([\w\/\;\s\=\-]+) 3 in NEDS Spec NEDS iRule NEDSv1-httpResponseLBTarget (neds\.f5\.http\.resp\.v1)","([\w\.\,:\-@/;\s\="]+),"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5})" 3 in NEDS Spec NEDS iRule NEDSv1-reportingSystem (\"neds.+[\w]\.v1\"),\"([\w.]+)\" 2 in NEDS Spec NEDS iRule NEDSv1-responseHTTPContentLength (neds[\w\.]+resp\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+),(\d+),"(\d{3})","([\w\/]+)","(\d+)" 7 in NEDS Spec NEDS iRule NEDSv1-responseHTTPServerResponseCode (neds[\w\.]+resp\.v1\",\"[\w\.]+",")(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}-\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:\d{1,5}@\d+\.\d+)\",(\d+\.\d+),(\d+),"(\d{3})" 5 in NEDS Spec w3c iRule W3C Client IP address client_ip=(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) 1 in dc post w3c iRule W3C Client Port client_port=(\d{1,5}) 1 in dc post w3c iRule W3C Client username username=([\w]+) 1 in dc post w3c iRule W3C Content Length content_length=(\d+) 1 in dc post w3c iRule W3C HTTP Request request="(.*)"\ss 1 in dc post w3c iRule W3C HTTP version HTTP/(\d\.\d)" 1 in dc post w3c iRule W3C Host header host=(.+?) 1 in dc post w3c iRule W3C Member server lb_server=(\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}:\d{1,5}) 1 in dc post w3c iRule W3C Server Response Code server_status=(\d{3}) 1 in dc post w3c iRule W3C User Agent user_agent="(.*)" 1 in dc post w3c iRule W3C VIrtual Server name virtual=(.*?)\s 1 in dc post w3c iRule w3c Server Port lb_server=\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:(\d{1,5}) 1 in dc post w3c iRule w3c referer referer=([\w\./\:\-]+) 1 in dc post w3c iRule w3c resp_time resp_time=(\d+) 1 in dc post W3C offload case Now that BIG-IP is setup to send messages to the QRadar system double check to see that you’ve installed the w3c-client-logging iRule on a vip and we’ll see what it looks like when everything is put together. Login to your QRadar management console and navigate to the Events tab. You should see events streaming into QRadar if everything is configured correctly. If you can’t find what you’re looking for in the normalized events view change it to the raw view – I also opted to have my console autorefresh every minute. There last message is the one I’m looking for. If you find a similar message and double click on it we can start to extract data from the message and build up some searches and reports. I’ve not fully customized my QRadar deployment, so I’m ignoring the fact that the Log Source for the iRule messages has been identified by the system’s FastIronDsm. After your screen is showing the Event Viewer click on the Extract Property button in the button bar. This will launch the Custom Event Property Definition tool, which will allow you to categorize event elements and write the regex for extracting the information you’re interested in. Right now, I’m interested in the HTTP server status codes. 
For your custom field extractions, here’s a sample message from the W3C iRule: Feb 9 14:23:21 tmm tmm[5088]: Rule w3c-client-logging : virtual=www.f5demo.com_http client_ip=65.197.145.92 client_port=37227 lb_server=10.10.200.1:80 host=www.f5demo.com username= request="GET /compression HTTP/1.0" server_status=301 content_length=322 resp_time=1 user_agent="check_http/1.96 (nagios-plugins 1.4.5)]" referer= The regular expressions for key value pairings are pretty easy to create. In this window you can see that the regex has located the item in the log message I’m interested in – it’s highlighted in yellow. Save the regex extraction and you’ll be returned to the Event Viewer and look for the new property listed on the page. Now the attribute shows up on the Event list page, down at the bottom. I’ve already entered several regular expressions for the NEDS data, and since I’ve assigned them all to the same Device Support Module (DSM) they’re showing up on this page; and this isn’t a NEDS message so they’re not applicable to this stream. With the extraction we just assigned to this DSM and log source we can return to the Event List and build a search. After the search is built it can be used to filter the events list, and we can build a report from it. In the Event Viewer Click on the Search button and select the New Event Search option. I’ve also added extractions to my system for response time, and member server. This helps to further illustrate what’s happening in my environment – I can see what hosts are sending which response codes and get a rough idea on what the client and server performance is. Pick the fields you’re interested in including in the search and click on the ‘Filter’ button. QRadar composes the search and saves it to the list of defined searches. I could also add a regex to extract the BIG-IP name from the message and group by that attribute to get an idea of what’s happening across the various BIG-IPs in my environment. Now the search runs – and I find that in the last 6 hours there have been 147 HTTP 304 response codes recorded by the system. Here’s my search result: /p> I see that in the last 6 hours there have been 147 HTTP 304’s recorded by the system. To turn this search into a report or make it available to the dashboard, click on the Save Criteria button in the toolbar. I’ve found that it helps to group searches together, so I’ve created a group called BIG-IP for all my BIG-IP related searches. For this search to appear on your dashboard you’ll also need to click the “Add item…” button on the dashboard and locate your search. To generate the report, click the Reports tab and find the Actions dropdown in the tool bar – select Create. I’m building a manual report for this step. My report uses a single frame and Events/Logs as the information source. And here’s my report: Accounting for the date format, there were a lot of 304’s returned to clients on March 5 th and I probably have data missing from March 10 th onwards because my BIG-IP was sending log messages somewhere else. NEDS case While the w3c offload case used an iRule with key/value tuples NEDS uses a comma delimited string to convey information in the message. I spent some time with the specification and wrote several regular expressions to extract the data. The process is identical to what’s outlined in the w3c case, so I’ll save the screen real estate and skip the screen shots of the process. 
You can find my regular expressions here – I’m fairly new to regular expressions, so I’m sure that there are improvements that can be made to make mine more efficient/maintainable. After defining the custom attributes here’s what I get when viewing a NEDS message in the Event Viewer. To syslog, A NEDS message looks like Mar 30 10:44:59 tmm tmm[5088]: Rule networkEventDataStream <HTTP_RESPONSE>: "neds.f5.http.resp.v1","bigip9.f5demo.com","65.197.145.92:42709-65.197.145.93:80@1269971099.951082",1269971099.952527,1,"301","text/html; charset=iso-8859-1","322","10.10.200.1:80","65.197.145.92:42709-10.10.200.1:80" Here’s the detailed view using the regular expressions for http response, client close, http request and client accepted messages. HTTP Response Client close HTTP Request Client Accepted < Here’s a select result of a search I composed for the connection close data. The clientCloseBytesOut is not N/A filters out all the non-client-close NEDS messages. The process to generate a report for NEDS data mirrors the process for the W3C case: 1. Save the search 2. Create a report template 3. Add the search to the report template 4. Save the template 5. Run the report Dashboard data case To get the dashboard data streaming into QRadar, I had to modify my base script to send the messages via syslog, instead of just printing the string. In addition to the QRadar and BIG-IP systems, you’ll need another host with the requisite Perl modules installed to relay the data from the BIG-IP to the QRadar. Here’s the script: Dashboard Syslog Pearl Script To use it you’ll need to: 1. On line 66 configure the username the script should use to access the dashboard interface 2. On line 67 configure the password for the username from step 1 3. On line 48 configure the IP address or name that for the QRadar event collector 4. Schedule the script to run periodically on your relay host – cron would do nicely. Once you’ve got the data into QRadar you’ll need to: 1. Find the endpoint-isession-stat data log message and write your regular expressions for the data you’re interested in 2. Find the remote endpoint log message you’re interested in and write your regular expressions for the data you’re interested in 3. Build and save your searches 4. 
Build and run your report template Here’s a sample endpoint-isession-stat data log message in syslog: Mar 15 17:05:46 127.0.0.1 10.11.100.73: device_timestamp='Wed Mar 15 00:05:46 2010 GMT' HostName='bigip3900c.demo.f5demo.com' version.version='10.1.0' message_type='endpoint_isession_stat' name='_tunnel_ctrl_10.20.50.103' peer_ref='00:00:00:00:00:00:00:00:00:00:ff:ff:0a:14:32:67' null.in.uses='292670' null.in.errors='0' null.in.bytes_opt='31371493' null.in.bytes_raw='29030133' null.out.uses='292670' null.out.errors='0' null.out.bytes_opt='31371493' null.out.bytes_raw='29030133' lzo.in.uses='1250' lzo.in.errors='0' lzo.in.bytes_opt='139167' lzo.in.bytes_raw='124178' lzo.out.uses='1250' lzo.out.errors='0' lzo.out.bytes_opt='139166' lzo.out.bytes_raw='124177' deflate.in.uses='0' deflate.in.errors='0' deflate.in.bytes_opt='0' deflate.in.bytes_raw='0' deflate.out.uses='0' deflate.out.errors='0' deflate.out.bytes_opt='0' deflate.out.bytes_raw='0' dedup.in.uses='0' dedup.in.errors='0' dedup.in.bytes_opt='0' dedup.in.bytes_raw='0' dedup.out.uses='0' dedup.out.errors='0' dedup.out.bytes_opt='0' dedup.out.bytes_raw='0' dedup_in.hit_bytes='0' dedup_in.hits='0' dedup_in.hit_hist.bucket_1k='0' dedup_in.hit_hist.bucket_2k='0' dedup_in.hit_hist.bucket_4k='0' dedup_in.hit_hist.bucket_8k='0' dedup_in.hit_hist.bucket_16k='0' dedup_in.hit_hist.bucket_32k='0' dedup_in.hit_hist.bucket_64k='0' dedup_in.hit_hist.bucket_128k='0' dedup_in.hit_hist.bucket_256k='0' dedup_in.hit_hist.bucket_512k='0' dedup_in.hit_hist.bucket_1m='0' dedup_in.hit_hist.bucket_large='0' dedup_in.miss_bytes='0' dedup_in.misses='0' dedup_in.miss_hist.bucket_1k='0' dedup_in.miss_hist.bucket_2k='0' dedup_in.miss_hist.bucket_4k='0' dedup_in.miss_hist.bucket_8k='0' dedup_in.miss_hist.bucket_16k='0' dedup_in.miss_hist.bucket_32k='0' dedup_in.miss_hist.bucket_64k='0' dedup_in.miss_hist.bucket_128k='0' dedup_in.miss_hist.bucket_256k='0' dedup_in.miss_hist.bucket_512k='0' dedup_in.miss_hist.bucket_1m='0' dedup_in.miss_hist.bucket_large='0' dedup_out.hit_bytes='0' dedup_out.hits='0' dedup_out.hit_hist.bucket_1k='0' dedup_out.hit_hist.bucket_2k='0' dedup_out.hit_hist.bucket_4k='0' dedup_out.hit_hist.bucket_8k='0' dedup_out.hit_hist.bucket_16k='0' dedup_out.hit_hist.bucket_32k='0' dedup_out.hit_hist.bucket_64k='0' dedup_out.hit_hist.bucket_128k='0' dedup_out.hit_hist.bucket_256k='0' dedup_out.hit_hist.bucket_512k='0' dedup_out.hit_hist.bucket_1m='0' dedup_out.hit_hist.bucket_large='0' dedup_out.miss_bytes='0' dedup_out.misses='0' dedup_out.miss_hist.bucket_1k='0' dedup_out.miss_hist.bucket_2k='0' dedup_out.miss_hist.bucket_4k='0' dedup_out.miss_hist.bucket_8k='0' dedup_out.miss_hist.bucket_16k='0' dedup_out.miss_hist.bucket_32k='0' dedup_out.miss_hist.bucket_64k='0' dedup_out.miss_hist.bucket_128k='0' dedup_out.miss_hist.bucket_256k='0' dedup_out.miss_hist.bucket_512k='0' dedup_out.miss_hist.bucket_1m='0' dedup_out.miss_hist.bucket_large='0' outgoing.conns_idle_cur='0' outgoing.conns_idle_max='0' outgoing.conns_idle_tot='0' outgoing.conns_active_cur='2' outgoing.conns_active_max='3' outgoing.conns_active_tot='3' outgoing.conns_errors='0' outgoing.conns_passthru_tot='0' incoming.conns_idle_cur='0' incoming.conns_idle_max='0' incoming.conns_idle_tot='0' incoming.conns_active_cur='2' incoming.conns_active_max='6' incoming.conns_active_tot='127' incoming.conns_errors='0' incoming.conns_passthru_tot='0' dedup_status_array='cccc ' Here’s a sample remote endpoint log message in syslog: Mar 15 17:05:50 127.0.0.1 10.11.100.73: device_timestamp='Wed Mar 15 
00:05:46 2010 GMT' HostName='bigip3900c.demo.f5demo.com' version.version='10.1.0' message_type='woc_peer' peer_ref='10.20.50.103' name='bigip3900b.demo.f5demo.com' UUID='cd18:4840:f9e0:' mgmt_addr='10.11.100.72' version='10.1.0' dedup_cache='203588' dedup_action='DEDUP_ACTION_NONE' dedup_cache_refresh_flag='false' dedup_cache_refresh_count='0' state='WOC_PEER_STATE_READY' is_enabled='true' origin='MCP_ORIGIN_CONFIGURED' profile_serverssl='' tunnel_encrypt_data='true' tunnel_port='443' behind_nat='false' source_address='WOC_PEER_NAT_SOURCE_ADDRESS_NONE' config_status='none' routing='true' addr_list=''

And lastly if you're an ASM or APM user there's an F5 Networks DSM that will recognize your ASM logs; all you need to do is define the hostname or IP address of your QRadar system on your ASM logging profile.

APM Session Invalidation Using ASM
Introduction:

Whenever customers expose their internal resources on the Web using VPNs or SSL VPNs there is still some concern over what type of traffic comes through the connection. In order to assist with these concerns we can provide a combined SSL VPN solution with added Application Security using the APM and ASM modules. The guidance for configuring these two modules can be found at the link below: http://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/big_ip_mod_interop_10_1_0.html?sr=16032121 However, once the customer has configured both APM and ASM there is still a residual concern: what about the APM session? The following guide provides a means to configure an environment with both APM and ASM modules by using an iRule to track the APM session and end it if an ASM violation occurs. With that in mind, let's get started.

Overview:

We will cover a scenario where the customer will have a Logon Page generated by APM and secured by ASM. This logon page will then authenticate against a backend system and allow access to the desired web application; the application will also be protected by ASM.

Logical Flow:
1. Client connects to the vs_client VIP
2. ASM policy applied and VTV redirects to vs_internal
3. Client lands on APM logon page and enters credentials
4. Credentials are verified against RADIUS server
5. Client session is secured by second ASM policy
6. Client request LB'd to server
With Auto Last Hop and SNAT Automap the flow will follow the same path out.

ASM Policies:

In order to secure the Logon Page generated by APM we will create two ASM policies. If the two policies are the same, the system will create two entries for each violation; this could complicate the troubleshooting and logging process. First: a policy built to secure the Logon Page. Second: a policy built to secure the customer application.

Virtuals:

The actual user deployment may vary, however for our example we needed to cover both HTTPS and HTTP objects on the webpage. As such we created two client facing virtuals as shown above. In order to maintain a single session id per user we have the two client facing virtuals pass traffic through vs_internal. This is done by creating a virtual on port 80 that redirects to 443. Once the traffic is on port 443 it will go through the flow described in the Flow section. The second option available for deploying mixed environments is creating a client facing virtual and server facing virtual pair. This option is not viable when attempting to invalidate sessions with the iRule, as we are unable to invalidate both sessions based on a single violation. A diagram of this configuration is below for clarification.

iRule:

The following iRule provides a means to invalidate sessions based on the rules configured in the ASM module:

when ACCESS_ACL_ALLOWED {
    # Grab the APM session cookie from the request
    set mrhsession [HTTP::cookie value "LastMRH_Session"]

    # If this session was previously flagged by ASM, tear it down
    if { [table lookup $mrhsession] == "violation" } {
        set user_logon [ACCESS::session data get "session.logon.last.username"]
        set sessionid [ACCESS::session data get "session.user.sessionid"]

        log local0.warn "ASM VIOLATION - Session: $sessionid, User: $user_logon"
        ACCESS::session remove
        table delete $mrhsession
    }
}

when ASM_REQUEST_VIOLATION {
    set mrhsession [HTTP::cookie value "LastMRH_Session"]

    # Flag the APM session so the next access check removes it
    if { $mrhsession != ""} {
        table set $mrhsession "violation"
        log local0.warn "ASM VIOLATION - MRHSession: $mrhsession"
    }
}

The iRule uses tables to maintain variables between scopes.
By using a table, the two events are able to share variables and act according to the value of those variables. With the iRule above, whenever a violation occurs, "violation" is inserted into the table under the matching APM session ID. If the session that triggers the next ACCESS_ACL_ALLOWED event matches that session ID and the value "violation" is present, the session is removed.

Gotchas:
Keep in mind that this iRule relies heavily on the ASM policy actually blocking the violations. For example, if the ASM policy does not block a SQL injection, the page will still return the requested information; the iRule only invalidates the session afterward. If the ASM policy catches and blocks the request, the information is not displayed, and on subsequent requests the user will find that their session is no longer valid. However, if the ASM policies are not properly configured before putting this iRule in place, it will not work as intended.

Many thanks go to Josh Mendoza for the fine work on writing up this solution and passing it along to be published. If anyone out there has any questions or comments, feel free to leave them here and we'll pass them along.
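As a closing aside, here is one optional refinement, offered as a sketch rather than as part of the original solution: the violation marker set in ASM_REQUEST_VIOLATION only ages out per the table's default idle timeout, so you could set an explicit timeout to make that lifetime deliberate. The 300-second value below is an arbitrary assumption to tune for your environment.

when ASM_REQUEST_VIOLATION {
    set mrhsession [HTTP::cookie value "LastMRH_Session"]
    if { $mrhsession ne "" } {
        # Same flag as above, but with an explicit 300-second idle timeout
        # (an assumed value) so stale markers expire even if the
        # ACCESS_ACL_ALLOWED event never fires again for this session.
        table set $mrhsession "violation" 300
        log local0.warn "ASM VIOLATION - MRHSession: $mrhsession"
    }
}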
New Geolocation Capabilities in v10.1

With the BIG-IP GTM and 3-DNS products, location-based service has existed in the form of topology-based load balancing. The possibilities grow exponentially in v10.1, as you now have the capabilities in GTM and LTM. In this tech tip, we'll discuss the iRules command access, the update procedures for keeping your data current, and some use cases for consideration.

Introduction
The level of detail has grown from continent and country on GTM to continent, country, state (or similar provincial boundaries outside the U.S.), carrier, and organization, available on GTM as well as on LTM and other modules with iRules access. There are native GUI tie-ins in GTM, but if you are using the data in iRules, the command usage is actually quite simple:

[whereis [IP::client_addr]] - returns a list with continent, country, state, and city (not yet available)
[whereis [IP::client_addr] continent] - returns the continent
[whereis [IP::client_addr] country] - returns the country
[whereis [IP::client_addr] <state|abbrev>] - returns the state information as a word or as a two-letter abbreviation
[whereis [IP::client_addr] isp] - returns the carrier
[whereis [IP::client_addr] org] - returns the registered organization

If you are looking up several levels, a quick performance check revealed that setting the list to a single variable and using lindex is 3x more efficient than using the whereis keywords (a short sketch of this pattern appears just before the use cases below). Note that unavailable data will be returned as an empty string for these fields.

Updating Your Geolocation Data
Previously, updating your location data required a TMOS version upgrade. Now, the updates are released frequently and made available at https://downloads.f5.com. Also, if you want to use the org keyword with the whereis command, you'll need to download the update or you'll get an error during execution. The F5 schema supports additional data fields that will be documented on a wiki page to be made available in the future. You can experiment with the commands now; unavailable fields will simply return an empty string or a zero. Sub-state granularity requires data not included by default on BIG-IP; contact F5 sales if you are interested in more detailed granularity.

To get started, go to the downloads site, select the latest zip file, download it, extract it locally, and then upload the RPMs to the LTM (or upload the archive and use unzip on the BIG-IP). I extracted locally and used PuTTY's pscp executable:

C:\Users\rahm\Downloads>pscp geo*.rpm root@172.16.99.128:/var/tmp/
Using keyboard-interactive authentication.
Password:
geoip-data-ISP-1.0.0-2010 | 2398 kB | 2398.5 kB/s | ETA: 00:00:00 | 100%
geoip-data-Org-1.0.0-2010 | 53565 kB | 6695.7 kB/s | ETA: 00:00:00 | 100%
geoip-data-Region2-1.0.0- | 12708 kB | 4236.0 kB/s | ETA: 00:00:00 | 100%

Now that you have the files local, you can install them:

[root@localhost:Active] tmp # ls | grep geo.*.rpm
geoip-data-ISP-1.0.0-20100201.28.0.i686.rpm
geoip-data-Org-1.0.0-20100201.28.0.i686.rpm
geoip-data-Region2-1.0.0-20100201.28.0.i686.rpm
[root@localhost:Active] tmp # geoip_update_data geoip-data-ISP-1.0.0-20100201.28.0.i686.rpm
[root@localhost:Active] tmp # geoip_update_data geoip-data-Org-1.0.0-20100201.28.0.i686.rpm
[root@localhost:Active] tmp # geoip_update_data geoip-data-Region2-1.0.0-20100201.28.0.i686.rpm

For further details on the usage of the geolocation database, reference Solution 11176, the TMOS Management Guide, and the Configuration Guide for GTM. All three documents require support logins.

Use Cases
There are many use cases for utilizing geolocation data, but we'll look at just a few below.
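As promised, here is the single-lookup pattern mentioned earlier: one whereis call is cached in a variable and the individual fields are pulled out with lindex. This is a minimal sketch; the event chosen and the log line are illustrative only.

when HTTP_REQUEST {
    # One lookup returns the full list: continent, country, state, city.
    set geo [whereis [IP::client_addr]]
    set continent [lindex $geo 0]
    set country [lindex $geo 1]
    set state [lindex $geo 2]
    log local0. "geo lookup for [IP::client_addr]: $continent / $country / $state"
}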
Before diving into those use cases, there are a couple of things to note on the use of the geolocation data. Yep, we're talking EULA. The data is purchased by F5 for use on BIG-IP systems and products for traffic management. The key to understanding EULA compliance is to figure out where the geolocation decision is being made. It is a direct violation of the EULA to use F5's data to embed geolocation information, or codes representing geolocation information, into requests such that another application or server could make the decision on what to do with that data. Customers wishing to use geolocation data on their web servers or in their applications to make decisions in those products should reach out to their account team for guidance. F5's traffic management products have a lot of power and flexibility and can make lots of decisions about traffic using the geolocation data on the BIG-IP. For example, a geolocation lookup can be used to route requests to a different site, server, or URL, or even to substitute a different image, object, etc., in the stream. The key is that the BIG-IP is making use of the data to make a decision and take some action. These are all allowed and, in fact, intended usages of the geolocation data. Passing the looked-up data to another system or displaying it back publicly is a violation of the basic data EULA. To summarize, all usage of the data must remain local to the system, with the following two exceptions:

1. Location can be placed in an encrypted cookie for reference ONLY by other BIG-IP devices.
2. Logging data can contain location info and be collected into a central logging solution for analysis of F5 logs.

Note that you can get a waiver of the EULA. In the near future, F5 expects to have expanded data sets available with fewer restrictions on the use cases. If you're unsure, run your use case by your sales engineer.

Localizing Content
Many websites feature their language options at the top right, in the footer, or sometimes in the navigation itself. You can still make this option available (case in point: I was in Belgium on business a couple of weeks ago but much prefer to read my sites in English, thank you), but now with LTM access you can auto-switch the content.

when CLIENT_ACCEPTED {
    # Select a country-specific pool based on the client's source address.
    switch [whereis [IP::client_addr] country] {
        US { pool usa }
        CA { pool canada }
        MX { pool mexico }
        default { pool northamerica }
    }
}

Regulatory Compliance
Perhaps you face restrictions on where your clients can access your application from. This iRule will prevent users not originating from Missouri or Illinois from accessing the application:

when CLIENT_ACCEPTED {
    # Only clients whose state abbreviation is MO or IL reach the application.
    if { !(([whereis [IP::client_addr] abbrev] equals "MO") or ([whereis [IP::client_addr] abbrev] equals "IL")) } {
        pool rejected
    }
}
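The carrier and organization lookups open up similar possibilities. The following is purely an illustrative sketch rather than anything from the original article: the pool name and the substring match are assumptions, and the strings returned for isp and org will vary with the data set.

when HTTP_REQUEST {
    # Look up the carrier (isp) and registered organization for the client.
    set carrier [string tolower [whereis [IP::client_addr] isp]]
    set org [whereis [IP::client_addr] org]
    log local0. "client [IP::client_addr] carrier=$carrier org=$org"
    # Hypothetical example: steer recognized wireless-carrier traffic to a
    # lighter-weight pool (the pool name is an assumption).
    if { $carrier contains "wireless" } {
        pool mobile_pool
    }
}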
Redirecting to Closer Geography
This is one use case that solves some of the difficulties in global server load balancing. With GSLB, the accuracy of geographic distribution relies on a well-designed LDNS infrastructure. I know of several entities where all LDNS is centralized, which leads to problematic distribution. Now, you can analyze at the local level and redirect accordingly. Assuming you built a data group that maps state abbreviations to regions (e.g., "MO" := "midwest"), you could build a simple iRule to make sure your clients use the datacenter nearest them:

when HTTP_REQUEST {
    # Map the client's state abbreviation to a region name via the us_regions data group.
    set region [class match -value [whereis [IP::client_addr] abbrev] equals us_regions]
    if { $region ne "" } {
        switch $region {
            midwest { pool $region }
            east { HTTP::redirect http://my-east.application.com }
            south { HTTP::redirect http://my-south.application.com }
            west { HTTP::redirect http://my-west.application.com }
        }
    } else {
        pool default
    }
}

Another way you might approach that:

when HTTP_REQUEST {
    set region [class match -value [whereis [IP::client_addr] abbrev] equals us_regions]
    if { $region ne "" } {
        if { $region equals "midwest" } {
            pool $region
        } else {
            HTTP::redirect http://my-$region.application.com
        }
    } else {
        pool default
    }
}

*Note: all of these iRules are conceptual in nature and are untested.

Conclusion
Accurate, updatable geolocation data, integrated into BIG-IP. We're really just scratching the surface here. Exciting things to come, so stay tuned.
More Web Scraping - Bot Detection

In my last article, I discussed the issue of web scraping and why it can be a problem for many individuals and companies. In this article, we will dive into some of the technical details regarding bots and how the BIG-IP Application Security Manager (ASM) can detect them and block them from scraping your website.

What Is A Bot?
A bot is a software application that runs automated tasks and typically performs these tasks much faster than a human possibly could. In the context of web scraping, bots are used to extract data from websites, parse the data, and assemble it into a structured format where it can be presented in a useful form. Bots can perform many other actions as well, like submitting forms, setting up schedules, and connecting to databases. They can also do fun things like add friends to social networking sites like Twitter, Facebook, Google+, and others. A quick Internet search will show that many different bot tools are readily available for download free of charge. We won't go into the specifics of each vendor's bot application, but it's important to understand that they are out there and are very easy to use.

Bot Detection
So, now that we know what a bot is and what it does, how can we distinguish between malicious bot activity and harmless human activity? Well, the ASM is configured to check for some very specific activities that help it determine whether the client source is a bot or a human. By the way, it's important to note that the ASM can accurately detect a human user only if clients have JavaScript enabled and support cookies.

There are three different settings in the ASM for bot detection: Off, Alarm, and Alarm and Block. Obviously, if bot detection is set to "Off," the ASM does not check for bot activity at all. The "Alarm" setting will detect bot activity and record attack data, but it will allow the client to continue accessing the website. The "Alarm and Block" setting will detect bot activity, record the attack data, and block the suspicious requests. These settings are shown in the screenshot below.

Once you apply the setting for bot detection, you can then tune the ASM to begin checking for bots that access your website. Bot detection utilizes four different techniques to detect and defend against bot activity: Rapid Surfing, Grace Interval, Unsafe Interval, and Safe Interval.

Rapid Surfing detects bot activity by measuring the client's page consumption speed. A page change is counted from the page load event to its unload event. The ASM configuration allows you to set a maximum number of page changes for a given time period (measured in milliseconds). If the page changes more than the maximum allowable number of times within that interval, the ASM declares the client a bot and performs the action set for bot detection (Off, Alarm, Alarm and Block). The default setting for Rapid Surfing is 5 page changes per second (1000 milliseconds).
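To get a rough feel for this kind of rate counting, here is a conceptual iRule sketch that counts requests per client using the table command. This is not how ASM implements Rapid Surfing (ASM measures page load and unload events in the browser); it is only an illustration of the counting idea, and the key name and threshold below are assumptions.

when HTTP_REQUEST {
    # Per-client counter keyed on source address; illustrative only.
    set key "surf:[IP::client_addr]"
    set count [table incr $key]
    # Expire the counter after one second of inactivity, so strictly this
    # counts consecutive requests arriving less than a second apart.
    table timeout $key 1
    if { $count > 5 } {
        log local0.warn "Rapid surfing suspected from [IP::client_addr]: $count requests"
    }
}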
The Grace Interval setting specifies the maximum number of page requests the system reviews while it tries to detect whether the client is a human or a bot. As soon as the system makes that determination, it ends the Grace Interval and stops checking for bots. The default setting for the Grace Interval is 100 requests.

Once the system determines that the client is valid, it does not check the subsequent requests, as specified in the Safe Interval setting. This setting allows normal client activity to continue, since the ASM determined during the Grace Interval that the client is safe. Once the number of requests sent by the client reaches the value specified in the Safe Interval setting, the system reactivates the Grace Interval and begins the process again. The default setting for the Safe Interval is 2000 requests. The Safe Interval is nice because it lowers the processing overhead needed to constantly check every client request.

If the system does not detect a valid client during the Grace Interval, it issues, and continues to issue, the "Web Scraping Detected" violation until it reaches the number of requests specified in the Unsafe Interval setting. The Unsafe Interval setting specifies the number of requests that the ASM considers unsafe. Much like the Safe Interval, after the client sends the number of requests specified in the Unsafe Interval setting, the system reactivates the Grace Interval and begins the process again. The default setting for the Unsafe Interval is 100 requests. The following figure shows the settings for Bot Detection and the values associated with each setting.

Interval Timing
The following picture shows a timeline of client requests and the interval associated with each request. In the example, the first 100 client requests fall into the Grace Interval, and during this interval the ASM determines whether or not the client is a bot. Let's say a bot is detected at client request 100. The ASM then immediately invokes the Unsafe Interval, and the next 100 requests are issued a "Web Scraping Detected" violation. When the Unsafe Interval is complete, the ASM reverts to the Grace Interval. If, during the Grace Interval, the system determines that the client is a human, it does not check the subsequent requests at all (during the Safe Interval). Once the Safe Interval is complete, the system moves back into the Grace Interval and the process continues. Notice that the ASM is able to detect a bot before the Grace Interval is complete (as shown in the latter part of the diagram below). As soon as the system detects a bot, it immediately moves into the Unsafe Interval, even if the Grace Interval has not reached its set threshold.

Setting Thresholds
As you can see from the timeline above, it's important to establish the correct thresholds for each interval setting. The longer you make the Grace Interval, the better the chance you give the ASM to detect a bot, but keep in mind that the processing overhead can become expensive. Likewise, the Unsafe Interval setting is a great feature, but if you set it too high, your requesting clients will have to sit through a long period of violation notices before they can access your site again. Finally, the Safe Interval setting allows your users free and open access to your site. If you set it too low, you will force the system to cycle through the Grace Interval unnecessarily, but if you set it too high, a bot might have a chance to sneak past the ASM defense and scrape your site. Remember, the ASM does not check client requests at all during the Safe Interval.
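To make the cycle concrete, here is a small, purely conceptual Tcl sketch of the interval state machine described above, using the default thresholds (100/100/2000). It is an illustration of the logic only, not how ASM is implemented, and the verdict input is a stand-in for ASM's internal human-versus-bot determination.

# current : grace | unsafe | safe
# verdict : "human", "bot", or "" (no decision yet); only meaningful during grace
# count   : number of requests seen so far in the current interval
proc next_interval {current verdict count} {
    switch -- $current {
        grace {
            if { $verdict eq "bot" } { return unsafe }
            if { $verdict eq "human" } { return safe }
            if { $count >= 100 } { return unsafe }
            return grace
        }
        unsafe {
            # Violations are issued for these requests, then grace resumes.
            if { $count >= 100 } { return grace }
            return unsafe
        }
        safe {
            # Requests are not checked at all, then grace resumes.
            if { $count >= 2000 } { return grace }
            return safe
        }
        default { return grace }
    }
}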
Also, remember that the ASM does not perform web scraping detection on traffic from search engines that the system recognizes as legitimate. If your web application has its own search engine, it's recommended that you add it to the system: go to Security > Options > Application Security > Advanced Configuration > Search Engines and add it to the list (the ASM comes preconfigured with Ask, Bing, Google, and Yahoo already loaded).

A Quick Test
I loaded up a sample virtual web server (in this case, a fictitious online auction site) and then configured the ASM to Alarm and Block bot activity on the site. Then I fired up the iMacros plugin for Firefox and used it to scrape the site, sending many client requests in a short amount of time. After several requests to the site, I received the response page shown below. You can see that the response page settings in the ASM are shown in the Firefox browser window when web scraping is detected. These settings can be adjusted in the ASM by navigating to Security > Application Security > Blocking > Response Pages.

Well, thanks for coming back and reading about ASM bot detection. Be sure to swing by again for the final web scraping article, where I will discuss session anomalies. I can't possibly think of anything that could be more fun than that!
Web Scraping - Data Collection or Illegal Activity?

Web Scraping Defined
We've all heard the term "web scraping," but what is it, and why should we really care about it? Web scraping refers to an application that is programmed to simulate human web surfing by accessing websites on behalf of its "user" and collecting large amounts of data that would typically be difficult for the end user to access. Web scrapers process the unstructured or semi-structured data pages of targeted websites and convert the data into a structured format. Once the data is in a structured format, the user can extract or manipulate it with ease. Web scraping is very similar to web indexing (used by most search engines), but the end motivation is typically much different. Whereas web indexing is used to help make search engines more efficient, web scraping is typically used for reasons like change detection, market research, data monitoring, and, in some cases, theft.

Why Web Scrape?
There are lots of reasons people (or companies) want to scrape websites, and there are tons of web scraping applications available today. A quick Internet search will yield numerous web scraping tools written in just about any programming language you prefer. In today's information-hungry environment, individuals and companies alike are willing to go to great lengths to gather information about all sorts of topics. Imagine a company that would really like to gather some market research on one of its leading competitors...might it be tempted to invoke a web scraper that gathers all the information for it? Or what if someone wanted to find a vulnerable site that allowed otherwise not-so-free downloads? Or maybe a less-than-honest person wanted to find a list of account numbers on a site that failed to properly secure them? The list goes on and on. I should mention that web scraping is not always a bad thing. Some websites allow web scraping, but many do not. It's important to know what a website allows and prohibits before you scrape it.

The Problem With Web Scraping
Web scraping rides a fine line between collecting information and stealing information. Most websites have a copyright disclosure statement that legally protects their website information, and it's up to the reader/user/scraper to read these disclosure statements and follow along legally and ethically. In fact, the F5.com website presents the following copyright disclosure: "All content included on this site, such as text, graphics, logos, button icons, images, audio clips, and software, including the compilation thereof (meaning the collection, arrangement, and assembly), is the property of F5 Networks, Inc., or its content and software suppliers, except as may be stated otherwise, and is protected by U.S. and international copyright laws." It goes on to say, "We reserve the right to make changes to our site and these disclaimers, terms, and conditions at any time." So, scraper beware!

There have been many court cases where web scraping turned into felony offenses. One case involved an online activist who scraped the MIT website and ultimately downloaded millions of academic articles. This guy is now free on bond, but faces dozens of years in prison and a $1 million fine if convicted. Another case involves a real estate company that illegally scraped listings and photos from a competitor in an attempt to gain a lead in the market. Then there's the case of a regional software company that was convicted of illegally scraping a major database company's websites in order to gain a competitive edge.
The software company had to pay a $20 million fine, and the guilty scraper is serving three years of probation. Finally, there's the case of a medical website that hosted sensitive patient information. In this case, several patients had posted personal drug listings and other private information on closed forums located on the medical website. The website was scraped by a media-research firm, and all of this information was suddenly public. While many illegal web scrapers have been caught by the authorities, many more have never been caught and still run loose on websites around the world. As you can see, it's increasingly important to guard against this activity. After all, the information on your website belongs to you, and you don't want anyone else taking it without your permission.

The Good News
As we've noted, web scraping is a real problem for many companies today. The good news is that F5 has web scraping protection built into the Application Security Manager (ASM) of its BIG-IP product family. As you can see in the screenshot below, the ASM provides web scraping protection through bot detection, session opening anomaly detection, session transaction anomaly detection, and an IP address whitelist. The bot detection works with clients that accept cookies and process JavaScript. It measures the client's page consumption speed and declares a client a bot if a certain number of page changes happen within a given time interval. The session opening anomaly detection spots web scrapers that do not accept cookies or process JavaScript. It counts the number of sessions opened during a given time interval and declares the client a scraper if the maximum threshold is exceeded. The session transaction anomaly detection identifies valid sessions that visit the site much more than other clients. This defense looks at a bigger picture and blocks sessions that exceed a calculated baseline number derived from the current session table. The IP address whitelist allows known friendly bots and crawlers (e.g., Google, Bing, Yahoo, Ask), and this list can be populated as needed to fit the needs of your organization.

I won't go into all the details here because future articles will dive into how the ASM protects against each of these web scraping techniques. But suffice it to say, the ASM does a great job of protecting your website against the problem of web scraping. I'm sure that as you studied the screenshot above you also noticed lots of other protection capabilities the ASM provides: brute force attack prevention, customized attack signatures, Denial of Service protection, and more. You might be wondering how it does all that stuff as well. Give us a little feedback on the topics you would like to see, and we'll start posting some targeted tech tips for you!

Thanks for reading this introductory web scraping article, and be sure to come back for the deeper look at how the ASM is configured to handle this problem. For more information, check out this video from Peter Silva where he discusses ASM botnet and web scraping defense.