bot detection

Traffic flow between IPI, application security policy, bot detection, DoS protection, iRule, and Geolocation

I want to know how traffic flows between IPI, the application security policy, bot detection, DoS protection, iRules, and Geolocation (I use an iRule for Geolocation). I am using Global IPI (meaning the IPI policy is not attached to any VS), I have an iRule for Geolocation, and I only have the ASM and LTM modules (no APM and no AFM). I understand that iRules can be ordered. The application security policy, bot detection, DoS protection, and the iRules are attached to the VS. Here is the traffic flow as I understand it: traffic hits the Global IPI -> reaches the VS, where the iRules run in order (including Geolocation, which I always put first) -> application security policy -> DoS -> bot detection. Is this correct? Or do the application security policy, DoS, and bot detection happen at the same time? What is the best practice for Geolocation: using an iRule, or using Geolocation in the application security policy?
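
For context, a minimal sketch of the kind of Geolocation iRule being described, assuming the GeoIP database is loaded and using placeholder country codes (XX/YY) rather than any real policy:

    when CLIENT_ACCEPTED {
        # Look up the client country in the built-in GeoIP database
        set country [whereis [IP::client_addr] country]
        # "XX" and "YY" are placeholders for whichever countries should be rejected
        if { ($country eq "XX") || ($country eq "YY") } {
            reject
        }
    }

Running it in CLIENT_ACCEPTED means the lookup happens once per connection, before any HTTP processing.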

ASM Proactive Bot - Referer Header

Hi All, I turned on ASM Proactive Bot Defense and our analytics folks noticed that the Referer header isn't carried through after the JS challenge is complete. By the time the client browser answers the JS challenge, the backend sees the requested URL as the Referer instead of Google (as in our specific case). Analytics would like to see Google as the Referer. Is there any way to preserve the original Referer so that the backend sees the request as organic? Thanks for any insight you have. Cheers, Mike
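
One approach sometimes suggested is to stash the external Referer before the challenge round trip and replay it to the pool members afterwards. A rough sketch, where the cookie name orig_referer and the X-Original-Referer header are arbitrary illustration choices, not an F5 feature:

    when HTTP_REQUEST {
        if { [HTTP::cookie exists "orig_referer"] } {
            # Replay the stored Referer to the backend on later requests
            HTTP::header insert "X-Original-Referer" [URI::decode [HTTP::cookie value "orig_referer"]]
        } elseif { [HTTP::header exists "Referer"] } {
            # First request from this client: remember the external Referer
            set orig_ref [HTTP::header value "Referer"]
        }
    }
    when HTTP_RESPONSE {
        # Hand the stored Referer back to the browser in a cookie so it
        # survives the Proactive Bot Defense JavaScript challenge redirect
        if { [info exists orig_ref] } {
            HTTP::cookie insert name "orig_referer" value [URI::encode $orig_ref] path "/"
            unset orig_ref
        }
    }

Whether the cookie actually gets set on the challenge response itself depends on where Proactive Bot Defense injects its page relative to the iRule, so treat this as a starting point rather than a tested fix.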

iRule to log Proactive BotDefense to HSL?

We are running v12 LTM/ASM and have observed active botnet attacks. We enabled Proactive Bot Defense; however, as we found out from support tonight, logging is not available until version 13.x. From reading other posts there appears to be a way to log events via an iRule. Has anyone successfully gotten this to work in version 12? We are very close to upgrading, but won't make it in time with this event going on in the background. Any insight or samples are greatly appreciated! /jeff
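
The high-speed logging side of such an iRule is straightforward; the catch on v12 is that the iRule events exposing the bot defense verdict are not available there (as the post notes), so a sketch like the one below, assuming a pool named hsl_pool pointing at the remote syslog collector, can only record the requests reaching the virtual server, not the challenge outcome:

    when CLIENT_ACCEPTED {
        # "hsl_pool" is a placeholder for a pool containing the log collector
        set hsl [HSL::open -proto UDP -pool hsl_pool]
    }
    when HTTP_REQUEST {
        # Log basic request details over high-speed logging
        HSL::send $hsl "<134>pbd client=[IP::client_addr] host=[HTTP::host] uri=[HTTP::uri] ua=[HTTP::header User-Agent]\n"
    }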

Web Scraping Configuration

We would like some clarification on the web scraping protection in F5 Application Security that we can't seem to find. Does it block based on session or on IP? If a bot is detected during the grace interval, and we have the unsafe interval set to 100,000, shouldn't it block that IP for the 100,000 requests following the detection? We are seeing that once the session is closed, that IP is allowed back through with another grace interval. The scraper we are dealing with is intelligent enough to kill its session once it detects that we blocked it, and then it opens another session. So in our event logs we see the same IPs listed multiple times back to back: blocked for, say, 11 requests, then right back through with a new session. This isn't the behavior we want. We were under the impression that once an IP was detected as a bot, it would be blocked for the subsequent unsafe interval we had set. We tested this from an external connection by sending requests until they were detected and blocked by the device, but once we opened a new session we were free and clear again. Is there a setting we need to change to get the desired effect? We have looked through the documentation and don't see what would need to change.
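
If the built-in behavior does turn out to be session-scoped, one hedged workaround is a simple per-IP counter in an iRule running alongside it. A rough sketch, where the table key prefix, the 500-request threshold, and the 60-second window are arbitrary illustration values rather than recommendations:

    when HTTP_REQUEST {
        # Count requests per client IP in the session table
        set key "scrape_[IP::client_addr]"
        set reqs [table incr $key]
        # Refresh a sliding 60-second window on every request
        table timeout $key 60
        if { $reqs > 500 } {
            # Drop the connection once the per-IP threshold is exceeded
            reject
        }
    }

Because the counter is keyed on IP::client_addr rather than the ASM session, closing and reopening the session does not reset it within the window.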