Published 15-Mar-2017 21:00 · edited 05-Jun-2023 22:41 by JimmyPackets
Bots are everywhere. Some of them are nice, desirable bots; but many of them are not. By definition, a bot is a software application that runs automated tasks (scripts) over the Internet. The desirable ones include examples like Google's bots crawling your website so that Google can learn what information your site contains and then display your site's URL in its list of search results. Most people want this…many even pay big money to make sure their site is listed in the top results on Google. Other bots, though, are not so good. The more malicious bots are used to attack targets…typically via a Distributed Denial of Service (DDoS) attack. When many bots are controlled by a central bot controller, they form a “botnet” and can be used to send massive amounts of DDoS traffic at a single target. We have seen malicious bot behavior many times, but a recent high-profile example was the Mirai botnet's attack against several targets. Let's just say you didn't want to be on the receiving end of that attack.
Needless to say, bot activity is something that needs to be monitored and controlled. On one hand, you want the good bots to access your site, but on the other hand you want the bad ones to stay away. The question is, “how do you know the difference?” Great question. And the unfortunate answer for many organizations is: “I have no idea.” The other harsh reality, by the way, is that many organizations have no idea that they have a bot problem at all…yet they have a big one. Well, the BIG-IP ASM includes several bot-defending features, and this article will outline a feature called “Proactive Bot Defense.”
While the BIG-IP ASM has been detecting bots for quite some time now, it's important to know that it has also been steadily updated to include more automatic defense features. The BIG-IP ASM uses many different approaches to defending against bad bots, including bot signatures, transactions-per-second-based detection, stress-based detection, heavy URL protection, and CAPTCHA challenges. All of those approaches are manual in the sense that they require the BIG-IP ASM administrator to configure various settings in order to tune the defense against bad bots.
However, proactive bot defense automatically detects and prevents bad bots from accessing your web application(s). Here’s a picture of how it works:
After the initial request is finally sent to the server for processing, any future requests from that browser can bypass the JavaScript challenge because of the valid, signed, time-stamped cookie the BIG-IP ASM issued to that browser. The BIG-IP ASM steps through all these actions in order to protect your web application from getting attacked by malicious bots. In addition to the JavaScript challenge, the ASM also automatically enables bot signatures and blocks bots that are known to be malicious. When you add up all these bot defense measures, you get what we call “Proactive Bot Defense.”
Many features of the BIG-IP ASM require you to build a security policy, but Proactive Bot Defense does not. It is configured and turned on in the DoS profile. To access the DoS profile from the configuration screen, navigate to Security > DoS Protection > DoS Profiles. Then, you will see the list of DoS profiles. Either click the name of an existing DoS profile, or create a new one in order to configure the DoS profile. Also, on the left menu, under Application Security, click General Settings, and make sure that Application Security is enabled.
Once you click Proactive Bot Defense, you will be able to configure the settings for the operating mode of the profile. You have three options for when Proactive Bot Defense is implemented:
Cross-Origin Resource Sharing (CORS) is an HTML5 feature that enables one website to access the resources of another website using JavaScript within the browser. Specifically, these requests come from AJAX or CSS. If you enable Proactive Bot Defense and your website uses CORS, you should add the CORS URLs to the proactive bot URL whitelist.
Related to this, but slightly different, is the idea of "cross-domain requests." Sometimes a web application might need to share resources with another external website that is hosted on a different domain. For example, if you browse to www.yahoo.com, you might notice that the images and CSS arrive from another domain like www.yimg.com. Cross-domain requests are requests with different domains in the Host and Referer headers. Because this is a different domain, the cookie used to verify the client does not come with the request, and the request could be blocked. You can configure this behavior by specifying the conditions that allow or deny a foreign web application access to your web application after making a cross-domain request. This feature is called cross-domain request enforcement.
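To illustrate the definition above, here's a toy check that classifies a request as cross-domain by comparing the Host header against the domain in the Referer header. The function and its logic are just a sketch of the article's definition, not the BIG-IP's actual implementation (which considers far more than this).

```javascript
// Toy classifier: a request is "cross-domain" when the Host header's domain
// differs from the domain of the Referer header, per the definition above.
function isCrossDomain(hostHeader, refererHeader) {
  if (!refererHeader) return false; // no Referer: nothing to compare against
  const refererHost = new URL(refererHeader).hostname;
  return hostHeader.toLowerCase() !== refererHost.toLowerCase();
}
```

In the yahoo.com example, a request for an image on www.yimg.com carries a Referer of www.yahoo.com, so this check flags it as cross-domain.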
You enable cross-domain request enforcement as part of the Allowed URL properties within a security policy. Then you can specify which domains can access the response generated by requesting this URL (the “resource”), and also configure how to overwrite CORS response headers that are returned by the web server.
There are three options for configuring cross-domain requests:
If you selected one of the two Allow configured domains options, you will need to add Related Site Domains that are part of your web site, and Related External Domains that are allowed to link to resources in your web site. You can type these URLs in the form /index.html (wildcards are supported).
While these options are great for cross-domain requests, they do not help with AJAX if "Access-Control-Allow-Credentials" was not set by the client-side code of the application. To solve the AJAX case, the administrator could choose from one of three options. They are:
The database variables mentioned in option #3 above are as follows:
dosl7.cors_font_urls
URLs (or wildcards) of CSS that use @font-face to request fonts from another domain. Both the CSS and the FONT URLs are required here.
dosl7.cors_ajax_urls
URLs (or wildcards) of HTML pages that use AJAX to send requests to other domains. Only the HTML URL is needed here, and not the URL of the CORS request.
Requests to these URLs get redirected, and the TSPD_101 cookie gets added to the query string. For the HTML URLs, this is displayed in the address bar of the browser. When the requests are sent from the BIG-IP to the back-end server, the additional query string gets stripped off.
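As a rough sketch of that query-string round trip (the TSPD_101 parameter name is from the article; the redirect mechanics are simplified here, and the helper functions are hypothetical):

```javascript
// Hypothetical helpers illustrating the round trip described above:
// the redirect appends a TSPD_101 parameter to the query string, and the
// BIG-IP strips it again before forwarding the request to the back end.
function addChallengeParam(url, token) {
  const u = new URL(url);
  u.searchParams.set("TSPD_101", token);
  return u.toString();
}

function stripChallengeParam(url) {
  const u = new URL(url);
  u.searchParams.delete("TSPD_101");
  return u.toString();
}
```

The back-end server therefore never sees the extra parameter, even though it is visible in the browser's address bar for HTML URLs.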
@font-face
CSS in host1.com is requesting a font in host2.com:
@font-face {
  font-family: myfont;
  src: url('http://host2.com/t/cors/font/font.otf');
}
h1 {
  font-family: myfont;
  color: maroon;
}
To prevent the font request from being blocked, define both the CSS and font URLs using this command:
tmsh modify sys db dosl7.cors_font_urls value /t/cors/font/style.css,/t/cors/font/font.otf
AJAX
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://host2.com/t/cors/ajax/data.txt");
xhr.send();
To prevent the data.txt request from being blocked, define the HTML that contains the JavaScript using the following command:
tmsh modify sys db dosl7.cors_ajax_urls value /t/cors/ajax/,/t/cors/ajax/index.html
One more thing to note about AJAX requests: the cookie that is set is valid for 10 minutes by default (5 initial minutes plus the configured Grace Period). Single Page Applications will send AJAX requests well past this cookie expiration period and these requests will be blocked.
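To spell out the arithmetic behind that 10-minute default (both values come from this article):

```javascript
// The 10-minute default cookie validity is simply the 5 initial minutes
// plus the default Grace Period of 300 seconds.
const initialValiditySeconds = 5 * 60; // 5 initial minutes
const gracePeriodSeconds = 300;        // default Grace Period
const cookieValiditySeconds = initialValiditySeconds + gracePeriodSeconds;
console.log(cookieValiditySeconds / 60); // 10 minutes
```

Shortening the Grace Period shortens the cookie's validity window accordingly.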
In BIG-IP version 13.0.0 and up, there is support for Single Page Applications. You can simply check the checkbox in the General section of the DoS profile. Enabling this option causes JavaScript to be injected into every HTML response, which allows these requests to be supported.
Another configuration item to consider is what's called the “Grace Period.” This is the amount of time the BIG-IP ASM waits before it begins bot detection. The default value is 300 seconds, but this can be changed in the DoS profile settings along with the other items listed above. The Grace Period gives web pages (including complex pages with images, JavaScript, CSS, etc.) time to be recognized as non-bots, receive a signed cookie, and completely load without unnecessarily dropping requests. The Grace Period begins after the signed cookie is renewed, after a change is made to the configuration, or after proactive bot defense starts as a result of a detected DoS attack. During the Grace Period, the BIG-IP ASM will not block anything, so be sure to set this value as low as possible while still allowing enough time for a complete page to load.
The last thing I’ll mention is that, by default, the ASM blocks requests from highly suspicious browsers and displays a default CAPTCHA (or visual character recognition) challenge to browsers that could be suspicious. You can change the Block requests from suspicious browsers setting by clearing either Block Suspicious Browsers or Use CAPTCHA.
There are many other bot defense mechanisms available in the BIG-IP ASM, and other articles will cover those, but I hope this article has helped shed some light on the details of Proactive Bot Defense. So, get out there and turn this thing on…it’s easy and it provides automatic protection!
Hi,
 
Great article! I am a bit confused by the description of the options in the CORS section (Allow configured domains; validate in bulk, Allow configured domains; validate upon request). To my understanding, those are different from the descriptions in the built-in help on BIG-IP v13. On BIG-IP, all the descriptions start with the same sentence as the first option (Allow all requests). Maybe I am wrong, but to me the CORS configuration on BIG-IP controls the source domains that can send requests to a VS on the BIG-IP. The descriptions in the article seem to suggest that it controls the domains to which requests can be sent from content served by the VS on the BIG-IP: "This setting allows requests to other related internal or external domains that are configured in this section and validates the related domains in advance." Am I wrong here?
 
Another description that is a bit unclear is the Grace Period. Did it change in v13? In an old article (https://devcentral.f5.com/s/articles/more-web-scraping-bot-detection) the description of the Grace Period is like this:
 
"If, during the Grace Interval, the system determines that the client is a human, it does not check the subsequent requests at all (during the Safe Interval). Once the Safe Interval is complete, the system moves back into the Grace Interval and the process continues.
 
Notice that the ASM is able to detect a bot before the Grace Interval is complete (as shown in the latter part of the diagram below). As soon as the system detects a bot, it immediately moves into the Unsafe Interval...even if the Grace Interval has not reached its set threshold."
 
The definition in this article suggests that no bot detection is performed at all during the Grace Period - quite the opposite of the info in the old article. Piotr
 
Great article, John!
I have a question for you. In case the user deletes the proactive bot defense cookie (TSPD_101), will the ASM re-inject the JavaScript? What are the possible issues if I enable CSRF, web scraping, and proactive bot defense?
Best Regards,
SM
@Piotr and @zack, thanks for the questions about the "Grace Period" and the "Grace Interval". Admittedly, these two terms are very similar, but they are not exactly the same thing.
To be specific, the "Grace Interval" is a setting in the ASM for Anomaly Detection >> Web Scraping, and the "Grace Period" is a setting in the DoS profile of the ASM.
The "Grace Interval" is measured in number of requests (the default is 100), and this is the maximum number of page requests that the ASM will review while it determines whether the client is a bot or a human. During this period, the ASM can figure out if the client is a bot or a human, and as soon as it does, it switches to either the "Safe Interval" (if it determines the client is human) or the "Unsafe Interval" (if it determines the client is a bot). So, it's true to say that the ASM is trying to detect if the client is a bot during this "Grace Interval" number of requests.
Now, to "Grace Period"...this is a setting in the "Proactive Bot Defense" section of the DoS profile in the ASM, and it is measured in seconds, not requests. This "Grace Period" is the amount of time the ASM allows a client to load web pages (with both HTML and non-HTML) without being blocked. Here's when the "Grace Period" starts:
During the "Grace Period" the ASM is not checking to see if the client is a bot or not.
I hope this helps!
@SM, great questions about Proactive Bot Defense.
I hope this helps!
@john, thank you for your reply. If you set proactive bot defense to "Always" and the Grace Period is 300 seconds, the requests will not be blocked during the grace period, but will the F5 inject the JavaScript in each request? If yes, there will be an "endless" loop for 300 seconds. Right?
Thank you in advance.
Best Regards,
SM
@SM,
During the grace period the ASM will:
@John
Thank you for your reply. I need some more help. What if the user does not respond with a valid cookie within the 300 seconds? Proactive bot defense has been set to "Always".
Many thanks in advance!!
@John, can we say that if TSPD_101 is set/found in an HTTP request, it indicates that "proactive bot defense" is enabled?
Because I have an ASM (12.1.2 HF1) with no DoS profile assigned to the VS; however, I can see the TSPD_101 cookie being set after a bunch of requests (for example, hitting the F5 key fast).
What feature may cause this cookie to be set? Also, I would appreciate any resource that can explain the ASM JavaScript injection behavior...
Thanks!
@SM, sorry for the delayed response...if a user does not respond with a valid cookie during the grace period, then the request will either be blocked or answered with a proactive challenge, depending on the page/request qualification.
@zack, great question! The presence of the TSPD_101 cookie does not automatically indicate that Proactive Bot Defense has been enabled. Other security features on the BIG-IP also use this same cookie. I can look into writing more on the ASM JavaScript injection behavior as well...thanks for the feedback!
@John I know this thread is a little bit old for a new question but I found it fits well with questions about Proactive Bot Defense.
I tried to understand this option: Block requests from suspicious browsers
So... what on earth is a suspicious browser? Does the ASM check the User-Agent header to differentiate a suspicious browser from a good one? And what are a "highly suspicious browser" and a "moderately suspicious browser"?
I tried running a test based on this feature but unfortunately can't find a way/tool to do that. Could you please advise on any method that can be used to test this feature?
Thanks again for sharing. This is what makes DevCentral a really good place!
Hi zack...great questions! As you mentioned, the ASM can detect if a browser is "suspicious" but the question, of course, is "how does it do that?"
While the very specific details are in the secret sauce, I can tell you that suspicious browser checks are related to:
All of these checks have internal scoring mechanisms which ultimately lead to a value. If the value crosses a certain threshold, the browser is considered suspicious; if it crosses an even larger threshold, it is considered malicious.
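To make the scoring idea concrete, here is a toy sketch of that threshold-based classification. The per-check scores, weights, and threshold values below are made up purely to show the mechanism; the real checks and numbers are part of the secret sauce mentioned above.

```javascript
// Hypothetical thresholds -- the real values are proprietary.
const SUSPICIOUS_THRESHOLD = 50;
const MALICIOUS_THRESHOLD = 80;

// checkScores: array of per-check anomaly scores from the internal
// scoring mechanisms described above. The total determines the verdict.
function classifyBrowser(checkScores) {
  const total = checkScores.reduce((sum, s) => sum + s, 0);
  if (total >= MALICIOUS_THRESHOLD) return "malicious";
  if (total >= SUSPICIOUS_THRESHOLD) return "suspicious";
  return "benign";
}
```

The key point is simply that no single check decides the verdict; the combined score is compared against two thresholds, one for "suspicious" and a higher one for "malicious".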
I hope this helps!
John,
Using ASM on BIG-IP version 12.1.2 Build 1.0.271 Hotfix HF1, it looks to me like the TPS protection doesn't engage unless we have Proactive Bot Defense set to at least an operating mode of "During Attacks". My problem with this is protecting our REST services that work with native mobile clients that will not pass JavaScript/cookie challenges.
Is the only solution there to use the F5 SDK with the native mobile application so that it will interact with those challenges? If so, is there any solution for a financial institution who does not build their own mobile applications but would like to utilize ASM TPS DoS protection?
What is the expected behavior if Proactive Bot Defense is set to Always and a request comes from a benign bot? What I have seen is that something with a valid signature for, say, Googlebot will still get the JS challenge and ultimately will get blocked. I would expect that if the bot has been identified via the signatures, proactive defense would not kick in. Am I missing something? I've seen this on 12.1 and 13.1.
Hello everybody,
I have an issue here with a mobile app. I set a DoS profile with Proactive Bot Defense activated on a virtual server. Within this virtual server we have a webpage with several URLs that map to different real servers; this mapping is done by an Apache web server (this logic will be done by an iRule as soon as we fix this issue). One URL, /appexampleservices, contains the services for the mobile app (I mean the app from the Play Store); the app is designed to ask for content from this URL: https://customer.com/appexampleservices. Sometimes the app stops answering. When that happens, I check the HTTP responses in the analytics profile, and the URL answers a 307. This commonly happens each Monday (I think maybe because during the weekend the number of TPS and the amount of traffic decrease considerably). I did something to test while the issue was happening.
1.- To isolate the traffic for this specific URL, I wrote a little iRule to redirect all traffic for this URL to the real servers, bypassing the pool of Apache web servers:
when HTTP_REQUEST {
    if { [class match [HTTP::uri] starts_with "APPSERVICESURLCLASS"] } {
        log "Request: [HTTP::uri] from [IP::client_addr]"
        pool POOL-APPSERVICES
    }
}
2.- In a Firefox browser on my desktop computer, connected by WiFi to the Internet, I asked for the URL https://customer.com/appexampleservices. Then I checked the statistics of POOL-APPSERVICES; the connection was successful and I saw traffic in the pool.
3.- Then I tried to access with the Android app on my smartphone, connected to the same WiFi (same source IP address as the desktop); the request to https://customer.com/appexampleservices is made by default by the app. But this time the request was not successful. I reviewed the statistics on POOL-APPSERVICES and this time my connection from the app didn't appear; I didn't see traffic. Although I saw my connection in the logs (via the log "Request: [HTTP::uri] from [IP::client_addr]" statement), it never reached the pool.
Then the customer decided to disable the DoS profile and the ASM policy, and it worked pretty well. In fact, before this test I used to think that the 307 response came from the real server. Then I added the URL /appexampleservices to the URL whitelist and activated the DoS profile again, and it worked fine... until today, when it broke again and sent that awful 307 response code to the app. The customer asked me to disable all security features on the BIG-IP. My question is: how can I safely bypass PBD exclusively for this URL? All the other URLs work fine. Plus, as I said, this issue happens consistently each Monday, maybe because traffic increases by more than 800% after the weekend's inactivity. Is there a way to deal with this? I would appreciate your help, guys. Thanks a lot!!!
By the way, this is the general configuration set in the DOS profile:
I am new in F5 security tools, my main experience is in LTM, GTM and iRules so this is breaking my head.
I would appreciate a lot your help guys! Best regards
Hello Ivan
 
You are hitting the problem which is fixed by upgrading to the advanced ASM + mobile SDK.
 
Your issue as I see it (warning: assumptions are being made 🙂) is that the mobile app doesn't handle the JavaScript challenge very well - it is not a browser, so you can't assume that will work.
 
Take a look at this wiki page: https://clouddocs.f5.com/api/irules/BOTDEFENSE_ACTION.html
 
Here you can disable PBD for specific URLs based on whatever logic you like. If you can detect that it is the mobile app coming in, by looking at headers or other behaviors, then use that to disable PBD.
 
But the right solution will be to compile the F5 SDK into the app so it can answer the PBD challenge correctly. This will limit your attack surface and give you the full feature set of ASM.
 
Hope this makes sense.
 
Hi Inxgeek,
Well, I understand I have two ways to deal with this issue: bypass the traffic by adding all the URLs used by the app to the whitelists available in the DoS profile settings, or upgrade to advanced ASM plus buy the AppDome Fusion development. I have a single and very easy question: to upgrade to advanced ASM, do I just need to upgrade to a 13.x version, or do we need an additional add-on license? Considering this device already has a better license, is it necessary to buy an add-on?
Thanks a lot!! Best Regards
Hi Ivan
To upgrade to advanced ASM you need to purchase a license, and you also need to buy the subscription/license for the mobile SDK.
You can whitelist all the URLs, but then you more or less lose the DoS protection exactly where it should help you out. That is why I would try to get a better sense of the mobile app and then only disable PBD if you can "taste" that it is legitimate traffic, via an iRule.
Hi John!
Your description sounds as if JavaScript only gets injected if proactive bot defense is active in the DoS profile. Could it be that in version 13.1.x JavaScript gets injected even if proactive bot defense is set to "Off" in the Application Security tab of the DoS profile?
By the way, the paragraph "BIG-IP Configuration" states that the menu containing Application Security is on the left, whereas in version 13.1.x it is at the top of the screen, under the breadcrumb trail.
Great question @gscholz! I need to check the specifics of the v13.1.x and see about the JavaScript injection. It certainly could be true that another part of the ASM is injecting JavaScript even if the Proactive Bot Defense (PBD) is set to "Off". The PBD feature isn't the only one that sends JavaScript challenges. I have 14.x running on my lab right now, but I can load up 13.1.x and see if I can find the answer on this. Also, to your point, I should probably update this article (or write a new series) using the latest version. Thanks again for the great question!
Hi John,
It would be great if you could write an updated version. As far as I know, in 14+ PBD went through a major overhaul, and at least interface-wise it's very different. A friend of mine is really struggling to figure out this new incarnation. He also reported that a lot of testing and manual tuning is still necessary, as PBD seems to be very aggressive in its default config and can block plenty of legitimate users from accessing a site. One example is enabling Developer Tools in the browser - the requests are immediately blocked (I can't recall which signature is responsible). It also seems that after an upgrade the old PBD is converted to a new profile, and after conversion practically everything is blocked.
Piotr