HTTP Brute Force Mitigation Playbook: BIG-IP LTM Mitigation Options for HTTP Brute Force Attacks - Chapter 3
HTTP Brute Force Attacks can be mitigated using BIG-IP LTM features. Mitigation can be as straightforward as rejecting traffic from a specific source IP, network, geolocation or HTTP request property, or it can involve monitoring the number of requests from a given source or unique characteristic and rate limiting by dropping/rejecting requests that exceed a defined threshold.

Prerequisites

Managing the BIG-IP configuration requires Administrator access. Ensure access to the Configuration Utility (web GUI) and SSH is available. These management interfaces will be helpful in configuring, verifying and troubleshooting the BIG-IP. Having access to the serial console output of the BIG-IP is also helpful.

A Local Traffic Manager (LTM) and Application Visibility and Reporting (AVR) license is required to use the related features.

Prevent traffic from a Source IP or Network

As demonstrated in the data gathering chapter for iRules, LTM Policy and F5 AVR, a specific IP address or network may be sending suspicious or malicious traffic. One of the common ways to limit access to an HTTP Virtual Server is to define a whitelist or blacklist of IP addresses. HTTP Brute Force Attacks on a Virtual Server can be mitigated by blocking a suspicious IP address or network. This can be done through iRules, an LTM Policy or the network Packet Filter. Note that when blocking source IPs or networks, the source IP may be a proxy server forwarding requests from internal clients, so blocking it may unintentionally block legitimate clients. Monitor the traffic that is getting blocked and make the necessary adjustments to the related configuration.

The diagram below shows the packet processing path on a BIG-IP. Notice that it also references the Advanced Firewall Manager (AFM) packet path.
https://techdocs.f5.com/content/dam/f5/kb/global/solutions/K31591013_images.html/2018-0613%20AFM%20Packet%20Flow.jpg

Mitigation: LTM Packet Filter

On the left side of the BIG-IP packet processing path diagram is the Ingress section; if the packet information is not in the BIG-IP's Hardware Acceleration ePVA (Packet Velocity ASIC), the packet is checked against the packet filter. Thus, after determining that an IP address or a network is suspicious and/or malicious based on data gathered from the LTM Policy/iRules, AVR or external monitoring tools, a packet filter can be created to block these suspected malicious traffic sources.

Packet filtering can be enabled in the Configuration Utility at Network ›› Packet Filters : General. Packet filter rules can be configured at Network ›› Packet Filters : Rules.

Sample Packet Filter Configuration:
Packet filter configuration to block a specific IP address with the reject action and logging enabled
Packet filter configuration to block a Network with the reject action
An existing Packet Filter rule

Packet Filter generated logs can be reviewed at System ›› Logs : Packet Filter. This log shows an IP address was rejected by a packet filter rule.
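For reference, packet filter rules can also be managed from the command line. The following is a minimal tmsh sketch rather than part of the original article: the rule name, source network and order value are illustrative assumptions, and property names should be verified against your TMOS version.

# Create a packet filter rule that rejects and logs traffic from a suspect network
# (example name and network; packet filtering must also be enabled globally under
# Network >> Packet Filters : General).
tmsh create net packet-filter block_suspect_net order 10 action reject logging enabled rule "( src net 203.0.113.0/24 )"

# Review the rule.
tmsh list net packet-filter block_suspect_net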
Mitigation: LTM Policy

An LTM Policy can be configured to block a specified IP address directly or via an iRule Data Group.

Sample LTM Policy (tmsh output):

Tip: The TMOS shell (tmsh) command 'tmsh load sys config from-terminal merge' can be used to quickly load the configuration. Sample:

root@(sec8)(cfg-sync Standalone)(Active)(/Common)(tmos)# load sys config from-terminal merge
Enter configuration. Press CTRL-D to submit or CTRL-C to cancel.

This LTM policy blocks a specified IP address:

ltm policy block_source_ip {
    last-modified 2019-02-20:22:55:25
    requires { http tcp }
    rules {
        block_source_ip {
            actions {
                0 { shutdown connection }
                1 { log write facility local0 message "tcl:IP [IP::client_addr] is blocked by LTM Policy" priority info }
            }
            conditions {
                0 { tcp address matches values { 172.16.7.31 } }
            }
        }
    }
    status published
    strategy first-match
}

This LTM policy blocks the IP addresses defined in an iRule Data Group:

root@(sec8)(cfg-sync Standalone)(Active)(/Common)(tmos)# list ltm policy block_source_ip
ltm policy block_source_ip {
    last-modified 2019-12-01:14:40:57
    requires { http tcp }
    rules {
        block_source_ip {
            actions {
                0 { shutdown connection }
                1 { log write facility local0 message "tcl:IP [IP::client_addr] is blocked by LTM Policy" priority info }
            }
            conditions {
                0 { tcp address matches datagroup malicious_ip_dg }
            }
        }
    }
    status published
    strategy first-match
}

malicious_ip_dg is an iRule Data Group where the IP addresses are defined. Apply the LTM Policy to the Virtual Server that needs to be protected.

Mitigation: iRule to block an IP address

Using an iRule to block an IP address can be done at different stages of BIG-IP packet processing. The sample iRule below blocks the matched IP address during the FLOW_INIT event.

FLOW_INIT definition: This event is triggered (once for TCP and unique UDP/IP flows) after packet filters, but before any AFM and TMM work occurs.
https://clouddocs.f5.com/api/irules/FLOW_INIT.html

Diagram snippet from 2.1.9. iRules HTTPS Events: the FLOW_INIT event happens after the packet filter events. If an IP address is identified as malicious, blocking it early, before further processing, saves CPU resources, as iRule processing is resource intensive. Additionally, if the blocking of an IP address can be done using the LTM packet filter or an LTM policy, use those instead of the iRule approach.
https://f5-agility-labs-irules.readthedocs.io/en/latest/class1/module1/iRuleEventsFlowHTTPS.html

Sample iRule:

when FLOW_INIT {
    set ipaddr [IP::client_addr]
    if { [class match $ipaddr equals malicious_ip_dg] } {
        log local0. "Attacker IP [IP::client_addr] blocked"
        #logging can be removed/commented out if not required
        drop
    }
}

malicious_ip_dg is an iRule Data Group where the IP addresses are defined.

The sample iRule is adapted from K43383890: Blocking IP addresses using the IP geolocation database and iRules; there are more sample iRules in the referenced F5 Knowledge Article.
https://support.f5.com/csp/article/K43383890

Apply the iRule to the Virtual Server that needs to be protected.
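The data group and the virtual server association can also be managed from tmsh. This is a minimal sketch rather than part of the original article; the virtual server name (vs_example) and iRule name (block_malicious_ip_irule) are illustrative assumptions.

# Create the address data group referenced above and add a suspect source IP (example value).
tmsh create ltm data-group internal malicious_ip_dg type ip
tmsh modify ltm data-group internal malicious_ip_dg records add { 172.16.7.31/32 { data "blocked" } }

# Attach the blocking iRule and/or the LTM policy to the virtual server to be protected.
# Note: 'rules { ... }' replaces the virtual server's existing iRule list.
tmsh modify ltm virtual vs_example rules { block_malicious_ip_irule }
tmsh modify ltm virtual vs_example policies add { block_source_ip }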
Mitigation: Rate Limit based on IP address using iRules

A common scenario during an increase in connections from a suspected brute force attack on a Virtual Server with an HTTP application is looking for options to rate limit connections to it. Using an iRule to rate limit connections based on IP address is possible, and it also offers a finer level of control and additional logic should it be needed. Here is a sample iRule to rate limit requests by IP address:

when RULE_INIT {
    # Default rate to limit requests
    set static::maxRate 15
    # Default rate to warn at
    set static::warnRate 12
    # During this many seconds
    set static::timeout 1
}
when CLIENT_ACCEPTED {
    # Increment and get the current request count bucket
    set epoch [clock seconds]
    set currentCount [table incr -mustexist "Count_[IP::client_addr]_${epoch}"]
    if { $currentCount eq "" } then {
        # Initialize a new request count bucket
        table set "Count_[IP::client_addr]_${epoch}" 1 indef $static::timeout
        set currentCount 1
    }
    # Actually check for being over limit
    if { $currentCount >= $static::maxRate } then {
        log local0. "ERROR: IP:[IP::client_addr] exceeded ${static::maxRate} requests per second. Rejecting request. Current requests: ${currentCount}."
        event disable all
        drop
    } elseif { $currentCount > $static::warnRate } then {
        log local0. "WARNING: IP:[IP::client_addr] exceeded ${static::warnRate} requests per second. Will reject at ${static::maxRate}. Current requests: ${currentCount}."
    }
    log local0. "IP:[IP::client_addr]: currentCount: ${currentCount}"
}

Attach the iRule to the Virtual Server that needs to be protected.
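To exercise the rate limit from a client machine, a simple request burst can be generated and the resulting messages reviewed in /var/log/ltm. This is a test sketch, not from the original article: the virtual server address is an example, and connections the iRule drops will simply time out on the client side.

# Send a burst of requests from a single source IP (example virtual server address).
# --max-time keeps curl from hanging on connections the iRule silently drops.
for i in {1..30}; do curl -s -o /dev/null --max-time 2 -w "%{http_code}\n" http://172.16.8.86/; done

# Review the warning and error messages generated by the iRule.
grep -E "exceeded [0-9]+ requests per second" /var/log/ltm | tail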
HTTP information from sample requests

In the previous chapter, "Bad Actor Behavior and Gathering Statistics using BIG-IP LTM Policies and iRules and BIG-IP AVR", some HTTP information is available via AVR statistics, and some may be gathered through LTM Policy or iRule logs generated when HTTP requests are received on an F5 Virtual Server that has the iRule, LTM Policy or HTTP Analytics profile applied. These logs are typically written to /var/log/ltm, as configured by the iRule "log local0." statements or by the LTM policy log action by default. In the course of troubleshooting and investigation, a customer/incident analyst can decide which HTTP related information they consider malicious or undesirable. The following sample iRule and LTM Policy mitigations use HTTP related elements. Typical HTTP information used from the sample requests is the HTTP User-Agent header or an HTTP parameter; other HTTP information, such as other HTTP headers, can be used as well.

Mitigation: Prevent a specific HTTP header value

During HTTP Brute Force attacks, the HTTP User-Agent header value is often what an incident analyst reviews and blocks traffic on, since a particular User-Agent value is frequently used by the automated bots that launch the attack.

Sample LTM Policy to block a specific User-Agent:

root@(asm6)(cfg-sync Standalone)(Active)(/Common)(tmos)# list ltm policy Malicious_User_Agent
ltm policy Malicious_User_Agent {
    last-modified 2019-12-04:17:30:38
    requires { http }
    rules {
        block_UA {
            actions {
                0 { shutdown connection }
                1 { log write facility local0 message "tcl:the user agent [HTTP::header User-Agent] from [IP::client_addr] is blocked" priority info }
            }
            conditions {
                0 { http-header name User-Agent values { "Mozilla/5.0 (A-malicious-UA)" } }
            }
        }
    }
    status published
    strategy first-match
}

Logs generated by the LTM Policy in /var/log/ltm:

Jan 16 13:11:06 sec8 info tmm3[11305]: [/Common/Malicious_User_Agent/block_UA]: the user agent Mozilla/5.0 (A-malicious-UA) from 172.16.10.31 is blocked
Jan 16 13:11:06 sec8 info tmm5[11305]: [/Common/Malicious_User_Agent/block_UA]: the user agent Mozilla/5.0 (A-malicious-UA) from 172.16.10.31 is blocked
Jan 16 13:11:06 sec8 info tmm7[11305]: [/Common/Malicious_User_Agent/block_UA]: the user agent Mozilla/5.0 (A-malicious-UA) from 172.16.10.31 is blocked
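To confirm the policy behaves as expected, the blocked User-Agent can be reproduced from a client. This is a test sketch, not from the original article; the virtual server address is an example. A request with the blocked User-Agent should have its connection reset, while other User-Agent values pass through.

# Request with the blocked User-Agent value (connection should be shut down by the policy).
curl -v --max-time 5 -o /dev/null -A "Mozilla/5.0 (A-malicious-UA)" http://172.16.8.86/

# Request with a different User-Agent value for comparison.
curl -s -o /dev/null -w "%{http_code}\n" -A "Mozilla/5.0 (legitimate-browser)" http://172.16.8.86/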
Mitigation: Rate Limit an HTTP Header with a unique value

During an HTTP Brute Force Attack, there may be instances where a particular HTTP header in the attack traffic carries a certain value. If that HTTP header value is used repeatedly and appears to belong to automated requests, an iRule can be used to monitor the value of the HTTP header and rate limit it. Example:

when HTTP_REQUEST {
    if { [HTTP::header exists ApplicationSpecificHTTPHeader] } {
        set DEBUG 0
        set REQ_TIMEOUT 60
        set MAX_REQ 3
        ##
        set ASHH_ID [HTTP::header ApplicationSpecificHTTPHeader]
        set requestCnt [table lookup -notouch -subtable myTable $ASHH_ID]
        if { $requestCnt >= $MAX_REQ } {
            set remtime [table timeout -subtable myTable -remaining $ASHH_ID]
            if { $DEBUG > 0 } { log local0. "Dropped! wait for another $remtime seconds" }
            reject
            #this could also be changed to "drop" instead of "reject" to be more stealthy
        } elseif { $requestCnt == "" } {
            table set -subtable myTable [HTTP::header ApplicationSpecificHTTPHeader] 1 $REQ_TIMEOUT
            if { $DEBUG > 0 } { log local0. "Hit 1: Passed!" }
        } elseif { $requestCnt < $MAX_REQ } {
            table incr -notouch -subtable myTable [HTTP::header ApplicationSpecificHTTPHeader]
            if { $DEBUG > 0 } { log local0. "Hit [expr {$requestCnt + 1}]: Passed!" }
        }
    }
}

In this example iRule, the variable MAX_REQ has a value of 3, which limits requests carrying a specific ApplicationSpecificHTTPHeader value to 3 requests per REQ_TIMEOUT window.

iRule logs generated in /var/log/ltm:

Jan 16 13:04:39 sec8 info tmm3[11305]: Rule /Common/rate-limit-specific-http-header <HTTP_REQUEST>: Hit 1: Passed!
Jan 16 13:04:39 sec8 info tmm5[11305]: Rule /Common/rate-limit-specific-http-header <HTTP_REQUEST>: Hit 2: Passed!
Jan 16 13:04:39 sec8 info tmm7[11305]: Rule /Common/rate-limit-specific-http-header <HTTP_REQUEST>: Hit 3: Passed!
Jan 16 13:04:39 sec8 info tmm6[11305]: Rule /Common/rate-limit-specific-http-header <HTTP_REQUEST>: Dropped! wait for another 60 seconds
Jan 16 13:04:39 sec8 info tmm[11305]: Rule /Common/rate-limit-specific-http-header <HTTP_REQUEST>: Dropped! wait for another 60 seconds
Jan 16 13:04:39 sec8 info tmm2[11305]: Rule /Common/rate-limit-specific-http-header <HTTP_REQUEST>: Dropped! wait for another 60 seconds

Sample curl command to test the iRule. Notice the value of the ApplicationSpecificHTTPHeader HTTP header:

for i in {1..50}; do curl http://172.16.8.86 -H "ApplicationSpecificHTTPHeader: couldbemaliciousvalue"; done

Mitigation: Rate Limit a username parameter from the HTTP payload

A common HTTP Brute Force attack scenario involves credentials being tried repeatedly. In this sample iRule, the username parameter in the HTTP POST payload sent to a login URL is observed; if the same username is used multiple times and exceeds the defined maximum number of requests within the defined time frame, the connection is dropped.

when RULE_INIT {
    # The max requests served within the timing interval per the static::timeout variable
    set static::maxReqs 4
    # Timer Interval in seconds within which only static::maxReqs Requests are allowed.
    # (i.e: 10 req per 2 sec == 5 req per sec)
    # If this timer expires, it means that the limit was not reached for this interval and
    # the request counting starts over. Making this timeout large increases memory usage.
    # Making it too small negatively affects performance.
    set static::timeout 2
}
when HTTP_REQUEST {
    if { ( [string tolower [HTTP::uri]] equals "/wackopicko/users/login.php" ) and ( [HTTP::method] equals "POST" ) } {
        HTTP::collect [HTTP::header Content-Length]
    }
}
when HTTP_REQUEST_DATA {
    set username "unknown"
    foreach x [split [string tolower [HTTP::payload]] "&"] {
        if { [string tolower $x] starts_with "username=" } {
            log local0. "login parameters are $x"
            set username [lindex [split $x "="] 1]
            set getcount [table lookup -notouch $username]
            if { $getcount equals "" } {
                table set $username "1" $static::timeout $static::timeout
                # Record of this session does not exist, starting new record
                # Request is allowed.
            } elseif { $getcount < $static::maxReqs } {
                log local0. "Request Count for $username is $getcount"
                table incr -notouch $username
                # record of this session exists but request is allowed.
            } elseif { $getcount >= $static::maxReqs } {
                drop
                log local0. "User $username exceeded login limit current count:$getcount from [IP::client_addr]:[TCP::client_port]"
            } else {
                #log local0. "User $username attempted login from [IP::client_addr]:[TCP::client_port]"
            }
        }
    }
}

Logs generated in /var/log/ltm:

Jan 16 12:34:05 sec8 info tmm7[11305]: Rule /Common/post_request_username <HTTP_REQUEST_DATA>: login parameters are username=!@%23$%25
Jan 16 12:34:05 sec8 info tmm7[11305]: Rule /Common/post_request_username <HTTP_REQUEST_DATA>: User !@%23$%25 exceeded login limit current count:5 from 172.16.10.31:57128
Jan 16 12:34:05 sec8 info tmm1[11305]: Rule /Common/post_request_username <HTTP_REQUEST_DATA>: login parameters are username=!@%23$%25
Jan 16 12:34:05 sec8 info tmm1[11305]: Rule /Common/post_request_username <HTTP_REQUEST_DATA>: User !@%23$%25 exceeded login limit current count:5 from 172.16.10.31:57130

Additional reference: lindex - Retrieve an element from a list
https://www.tcl.tk/man/tcl8.4/TclCmd/lindex.htm
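A similar quick test can be run against the username rate limiting iRule. This is a test sketch, not from the original article: the virtual server address, login URI and credentials are example values, and once the per-username threshold is exceeded the dropped requests will time out on the client.

# Submit the same username repeatedly with different passwords (example values).
for i in {1..10}; do
  curl -s -o /dev/null --max-time 2 -w "%{http_code}\n" \
       -d "username=admin&password=guess${i}" \
       http://172.16.8.86/WackoPicko/users/login.php
done

# Review the iRule log messages for the limited username.
grep "exceeded login limit" /var/log/ltm | tail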
Prevent traffic source based on Behavior

Mitigation: TLS Fingerprint

The referenced DevCentral article, https://devcentral.f5.com/s/articles/tls-fingerprinting-a-method-for-identifying-a-tls-client-without-decrypting-24598, demonstrates that clients using certain TLS fingerprints can be identified. In an HTTP brute force attack, the attacking clients may present a particular TLS fingerprint that can be observed and later rate limited or dropped. TLS fingerprints can be gathered and used to manually or dynamically prevent malicious and suspicious clients coming from certain source IPs from accessing the iRule protected Virtual Server.

The sample TLS Fingerprint Rate Limiting and TLS Fingerprint proc iRules (see HTTP Brute Force Mitigation: Appendix for the sample iRules and other related configuration) work together to identify, observe and block TLS fingerprints that are considered malicious based on the amount of traffic they send. The TLS Fingerprinting proc iRule extracts the TLS fingerprint from the client hello packet of the incoming client traffic, which is unique to certain client devices. The TLS Fingerprint Rate Limiting iRule then checks whether a TLS fingerprint is expected, malicious or suspicious. The classification of an expected or malicious TLS fingerprint is done through LTM Data Groups.

Example: Malicious TLS Fingerprint Data Group

ltm data-group internal malicious_fingerprintdb {
    records {
        0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102 {
            data curl-bot
        }
    }
    type string
}

In this example Malicious TLS Fingerprint Data Group, the defined fingerprint may be added manually, as decided by a customer/analyst, when the TLS fingerprint has been observed sending an abnormal amount of traffic during an HTTP brute force event.

Expected / Good TLS Fingerprint Data Group

ltm data-group external fingerprint_db {
    external-file-name fingerprint_db
    type string
}

System ›› File Management : Data Group File List ›› fingerprint_db Properties:
Name: fingerprint_db
Partition / Path: Common
Data Group Name: fingerprint_db
Type: String
Key / Value Pair Separator: :=

Sample TLS signature:
signatures:#"0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102" := "User-Agent: curl-bot",

In the scenario where a TLS fingerprint is defined in the malicious fingerprint data group, it will be actioned as defined in the TLS Fingerprint Rate Limiting iRule. If a TLS fingerprint is neither malicious nor expected, the TLS Fingerprint Rate Limiting iRule considers it suspicious and rate limits it once a certain number of requests from that particular TLS fingerprint and IP address combination is exceeded. Here are example logs generated by the TLS Fingerprint Rate Limiting iRule; monitor the number of requests sent from the suspicious TLS fingerprint and IP address combination.
from the generated log, review the "currentCount" Dec 16 16:36:58 sec8 info tmm1[11545]: Rule /Common/fingerprintTLS-irule <CLIENT_DATA>: fingerprint:172.16.7.31_0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102: currentCount: 14 Dec 16 16:36:58 sec8 info tmm7[11545]: Rule /Common/fingerprintTLS-irule <CLIENT_DATA>: fingerprint:172.16.7.31_0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102: currentCount: 15 The HTTP User-Agent header value is included in the log to have a record of the TLS fingerprint and the HTTP User-Agent sending the suspicious traffic. This can later be used to define the suspicious TLS fingerprint and the HTTP User-Agent as a malicious fingerprint. Dec 16 16:36:58 sec8 info tmm1[11545]: Rule /Common/fingerprintTLS-irule <HTTP_REQUEST>: WARNING: suspicious_fingerprint: 172.16.7.31_0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102: User-Agent:curl/7.47.1 exceeded 12 requests per second. Will reject at 15. Current requests: 14. Dec 16 16:36:58 sec8 info tmm7[11545]: Rule /Common/fingerprintTLS-irule <HTTP_REQUEST>: WARNING: suspicious_fingerprint: 172.16.7.31_0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102: User-Agent:curl/7.47.1 exceeded 12 requests per second. Will reject at 15. Current requests: 15. The specific TLS fingerprint and IP combination is monitored and as it exceeds the defined request per second threshold in the TLS Fingerprint Rate Limiting iRule, further attempt to initiate a TLS handshake with the protected Virtual Server will fail. The iRule action in this instance is "drop". This will cause the connection to stall on the client side as the BIG-IP will not be sending any further traffic back to the suspicious client. 
Dec 16 16:36:58 sec8 info tmm1[11545]: Rule /Common/fingerprintTLS-irule <CLIENT_DATA>: ERROR: fingerprint:172.16.7.31_0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102 exceeded 15 requests per second. Rejecting request. Current requests: 16. Dec 16 16:36:58 sec8 info tmm1[11545]: Rule /Common/fingerprintTLS-irule <CLIENT_DATA>: fingerprint:172.16.7.31_0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102: currentCount: 16 Dec 16 16:40:04 sec8 warning tmm7[11545]: 01260013:4: SSL Handshake failed for TCP 172.16.7.31:24814 -> 172.16.8.84:443 Dec 16 16:40:04 sec8 info tmm7[11545]: Rule /Common/fingerprintTLS-irule <CLIENT_DATA>: ERROR: fingerprint:172.16.7.31_0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102 exceeded 15 requests per second. Rejecting request. Current requests: 17. Dec 16 16:40:04 sec8 info tmm7[11545]: Rule /Common/fingerprintTLS-irule <CLIENT_DATA>: fingerprint:172.16.7.31_0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102: currentCount: 17 Dec 16 16:40:04 sec8 warning tmm7[11545]: 01260013:4: SSL Handshake failed for TCP 172.16.7.31:35509 -> 172.16.8.84:443 If a TLS fingerprint is observed to be sending abnormal amount of traffic during a HTTP brute force event, this TLS fingerprint may be included manually as decided by a customer/analyst in the Malicious TLS Fingerprint Data Group. In our example, this is the malicious_fingerprintdb Data group. from the reference observed TLSfingerprint, an entry in the data group can be added. String: 0301+0303+0076+C030C02CC028C024C014C00A00A3009F006B006A0039003800880087C032C02EC02AC026C00FC005009D003D00350084C02FC02BC027C023C013C00900A2009E0067004000330032009A009900450044C031C02DC029C025C00EC004009C003C002F00960041C012C00800160013C00DC003000A00FF+1+00+000B000A000D000F3374+00190018001600170014001500120013000F00100011+060106020603050105020503040104020403030103020303020102020203+000102 Value: malicious-client Sample Data group in edit mode to add an entry: Mitigation: Prevent based on Geolocation It is possible during a HTTP Brute Force Attack that the source of the attack traffic is from a certain Geolocation. Attack traffic can be easily dropped from unexpected Geolocation thru an irule. 
The FLOW_INIT event is triggered when a packet initially hits a Virtual Server, be it UDP or TCP traffic. During an attack, the source IP and geolocation information can be observed using the sample iRule below, and the referenced Data Group can be manually updated with the country codes the attack traffic is sourcing from.

Example: Unexpected Geolocation (Blacklist) iRule:

when FLOW_INIT {
    set ipaddr [IP::client_addr]
    set clientip [whereis $ipaddr country]
    #logging can be removed/commented out if not required
    log local0. "Source IP $ipaddr from $clientip"
    if { [class match $clientip equals unexpected_geolocations] } {
        log local0. "Attacker IP detected $ipaddr from $clientip: Drop!"
        #logging can be removed/commented out if not required
        drop
    }
}

Data Group:

root@(sec8)(cfg-sync Standalone)(Active)(/Common)(tmos)# list ltm data-group internal unexpected_geolocations
ltm data-group internal unexpected_geolocations {
    records {
        KZ {
            data Kazakhstan
        }
    }
    type string
}

Generated log in /var/log/ltm:

Dec 16 21:21:03 sec8 info tmm7[11545]: Rule /Common/block_unexpected_geolocation <FLOW_INIT>: Source IP 5.188.153.248 from KZ
Dec 16 21:21:03 sec8 info tmm7[11545]: Rule /Common/block_unexpected_geolocation <FLOW_INIT>: Attacker IP detected 5.188.153.248 from KZ: Drop!
Dec 16 21:21:04 sec8 info tmm7[11545]: Rule /Common/block_unexpected_geolocation <FLOW_INIT>: Source IP 5.188.153.248 from KZ
Dec 16 21:21:04 sec8 info tmm7[11545]: Rule /Common/block_unexpected_geolocation <FLOW_INIT>: Attacker IP detected 5.188.153.248 from KZ: Drop!
Dec 16 21:21:06 sec8 info tmm7[11545]: Rule /Common/block_unexpected_geolocation <FLOW_INIT>: Source IP 5.188.153.248 from KZ
Dec 16 21:21:06 sec8 info tmm7[11545]: Rule /Common/block_unexpected_geolocation <FLOW_INIT>: Attacker IP detected 5.188.153.248 from KZ: Drop!
Dec 16 21:21:11 sec8 info tmm7[11545]: Rule /Common/block_unexpected_geolocation <FLOW_INIT>: Source IP 5.188.153.248 from KZ
Dec 16 21:21:11 sec8 info tmm7[11545]: Rule /Common/block_unexpected_geolocation <FLOW_INIT>: Attacker IP detected 5.188.153.248 from KZ: Drop!
Dec 16 21:21:15 sec8 info tmm7[11545]: Rule /Common/block_unexpected_geolocation <FLOW_INIT>: Source IP 5.188.153.248 from KZ
Dec 16 21:21:15 sec8 info tmm7[11545]: Rule /Common/block_unexpected_geolocation <FLOW_INIT>: Attacker IP detected 5.188.153.248 from KZ: Drop!
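Before adding a country code to the data group, the geolocation of an observed source IP can be confirmed from the BIG-IP command line, and the data group can be updated without using the GUI. This is a sketch, not from the original article; the IP address and country code used below are examples.

# Look up the geolocation of a suspect source IP using the on-box geolocation database.
geoip_lookup 5.188.153.248

# Add another country code record to the data group referenced by the blacklist iRule.
tmsh modify ltm data-group internal unexpected_geolocations records add { RU { data "Russia" } }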
Similarly, it is sometimes easier to whitelist, that is, allow only specific geolocations to access the protected Virtual Server. Here is a sample iRule and its Data Group as a possible option.

Expected Geolocation (Whitelist) iRule:

when FLOW_INIT {
    set ipaddr [IP::client_addr]
    set clientip [whereis $ipaddr country]
    #logging can be removed/commented out if not required
    log local0. "Source IP $ipaddr from $clientip"
    if { not [class match $clientip equals expected_geolocations] } {
        log local0. "Attacker IP detected $ipaddr from $clientip: Drop!"
        #logging can be removed/commented out if not required
        drop
    }
}

Data Group:

root@(sec8)(cfg-sync Standalone)(Active)(/Common)(tmos)# list ltm data-group internal expected_geolocations
ltm data-group internal expected_geolocations {
    records {
        US {
            data US
        }
    }
    type string
}

Sample curl command that sources traffic from the specified interface IP address:

[root@asm6:Active:Standalone] config # ip add | grep 5.188.153.248
    inet 5.188.153.248/32 brd 5.188.153.248 scope global fop-lan
[root@asm6:Active:Standalone] config # curl --interface 5.188.153.248 -k https://172.16.8.84
curl: (7) Failed to connect to 172.16.8.84 port 443: Connection refused

Mitigation: Prevent based on IP Reputation

IP Reputation can be used along with many features on the BIG-IP. IP reputation is enabled through an add-on license; when licensed, the BIG-IP downloads an IP reputation database, and IP traffic is checked against it, usually during connection establishment, to match the IP's category. If a condition to block a category is set, then depending on the BIG-IP feature being used, the connection can be dropped, TCP reset, or even answered with a custom HTTP response page. IPs with a bad reputation are likely to send attack traffic during an HTTP Brute Force attack, and blocking these categorised bad IPs helps lessen the traffic that a website needs to process.

Example: Using LTM Policy

Using an LTM Policy, IP reputation can be checked and the connection TCP reset if the IP matches a defined category:

[root@sec8:Active:Standalone] config # tmsh list ltm policy IP_reputation_bad
ltm policy IP_reputation_bad {
    draft-copy Drafts/IP_reputation_bad
    last-modified 2019-12-17:15:08:52
    rules {
        IP_reputation_bad_reset {
            actions {
                0 { shutdown client-accepted connection }
            }
            conditions {
                0 { iprep client-accepted values { BotNets "Windows Exploits" "Web Attacks" Proxy } }
            }
        }
    }
    status published
    strategy first-match
}

Verifying that the connection was TCP reset after the three-way handshake via tcpdump:

tcpdump -nni 0.0:nnn host 72.52.179.174
15:15:12.905996 IP 72.52.179.174.8500 > 172.16.8.84.443: Flags [S], seq 4061893880, win 29200, options [mss 1460,sackOK,TS val 313915334 ecr 0,nop,wscale 7], length 0 in slot1/tmm0 lis= flowtype=0 flowid=0 peerid=0 conflags=0 inslot=63 inport=23 haunit=0 priority=0 peerremote=00000000:00000000:00000000:00000000 peerlocal=00000000:00000000:00000000:00000000 remoteport=0 localport=0 proto=0 vlan=0
15:15:12.906077 IP 172.16.8.84.443 > 72.52.179.174.8500: Flags [S.], seq 2839531704, ack 4061893881, win 14600, options [mss 1460,nop,wscale 0,sackOK,TS val 321071554 ecr 313915334], length 0 out slot1/tmm0 lis=/Common/vs-172.16.8.84 flowtype=64 flowid=56000151BD00 peerid=0 conflags=100200004000024 inslot=63 inport=23 haunit=1 priority=3 peerremote=00000000:00000000:00000000:00000000 peerlocal=00000000:00000000:00000000:00000000 remoteport=0 localport=0 proto=0 vlan=0
15:15:12.907573 IP 72.52.179.174.8500 > 172.16.8.84.443: Flags [.], ack 1, win 229, options [nop,nop,TS val 313915335 ecr 321071554], length 0 in slot1/tmm0 lis=/Common/vs-172.16.8.84 flowtype=64 flowid=56000151BD00 peerid=0 conflags=100200004000024 inslot=63 inport=23 haunit=0 priority=0 peerremote=00000000:00000000:00000000:00000000 peerlocal=00000000:00000000:00000000:00000000 remoteport=0 localport=0 proto=0 vlan=0
15:15:12.907674 IP 172.16.8.84.443 > 72.52.179.174.8500: Flags [R.], seq 1, ack 1, win 0, length 0 out slot1/tmm0 lis=/Common/vs-172.16.8.84 flowtype=64 flowid=56000151BD00 peerid=0 conflags=100200004808024 inslot=63 inport=23 haunit=1 priority=3 rst_cause="[0x273e3e7:998] reset by policy" peerremote=00000000:00000000:00000000:00000000 peerlocal=00000000:00000000:00000000:00000000 remoteport=0 localport=0 proto=0 vlan=0

Example: Using an iRule

Using an iRule, IP reputation can be checked, and if the client IP matches a defined category, the traffic can be dropped. See the reference article https://clouddocs.f5.com/api/irules/IP-reputation.html. In this example iRule, if a source IP address matches any IP reputation category, it is dropped.

#Drop the packet at initial packet received if the client has a bad reputation
when FLOW_INIT {
    # Check if the IP reputation list for the client IP is not 0
    if {[llength [IP::reputation [IP::client_addr]]] != 0}{
        log local0. "[IP::client_addr]: category: \"[IP::reputation [IP::client_addr]]\""
        #remove/comment log if not needed
        # Drop the connection
        drop
    }
}

Generated log for a blocked IP with a bad reputation:

Dec 17 16:22:49 sec8 info tmm6[11427]: Rule /Common/ip_reputation_block <FLOW_INIT>: 72.52.179.174: category: "Proxy {Mobile Threats}"

Final Thoughts on LTM based iRule and LTM Policy Mitigations

Using iRules and LTM policies to mitigate HTTP Brute Force Attacks works well when only the LTM module is provisioned on the BIG-IP and the situation requires quick mitigation. iRules are community supported and are not officially supported by F5 Support. The sample iRules here were tested in a lab environment and work for lab scenarios that are closely modeled on actual observed attacks. iRules are best scoped, configured and implemented with F5 Professional Services, who work closely with the customer to define the functionality of the iRule per the customer's requirements.

Some of these mitigations can be done through LTM Policy. LTM Policy is a native BIG-IP feature and, unlike iRules, does not need "on the fly" compilation, so it is faster and is the preferred configuration over iRules. LTM Policy configuration is straightforward, while iRules can be more complicated but also more flexible, which is their advantage over LTM Policies. Rate limiting requests during an HTTP Brute Force attack can be a way to preserve some of the legitimate requests, and iRules allow flexible approaches to doing so.

There are more advanced mitigations for HTTP Brute Force attacks using the Application Security Manager (ASM) module, and these are preferred over iRules. For example, in BIG-IP version 14 for ASM, TLS Fingerprinting is a functionality included in the ASM protection profiles. TPS based mitigation can also be configured using the ASM protection profiles - for example, if requests from a source IP exceed the defined request threshold, they can be actioned as configured, such as being blocked or challenged with a CAPTCHA. Using an ASM Security Policy, attacks such as Credential Stuffing can be mitigated using the Brute Force Protection configuration. Bots can also be categorized and then allowed, challenged or blocked using the Bot Defense Profile and Bot Signatures.

HTTP Brute Force Mitigation Playbook: Overview - Chapter 1
Overview

When we talk about Brute Force attacks, we usually tend to think about a malicious actor using a script or botnet to inject credentials into a login form in order to try to brute force their way past an authentication mechanism, but that is far from the only kind of brute force attack we see in the wild today, with attacks against API endpoints becoming increasingly common as traditional web development gives way to an API-centric, cloud-driven microservices model alongside moves to federated authentication for services like Office 365. While many of these moves are great for scalability and accessibility, they also open up an increasingly large attack surface that malicious actors are beginning to take advantage of. In this document, we aim to show you some of the BIG-IP tools and techniques available to mitigate brute force attacks against your organisation, as well as sample configurations you can use as a basis for part of your security configuration.

Introduction

In this series of articles we will show you the BIG-IP tools and techniques you can leverage to understand, classify and mitigate brute force attacks using:
BIG-IP AVR Analytics
BIG-IP LTM, iRules and Local Traffic Policies
BIG-IP ASM with ASM Brute Force protections
Bot Defence
Fingerprinting (TLS Fingerprinting & HTTP Fingerprinting)
L7 DoS protections

We will cover the following kinds of Brute Force attack:
Attacks against traditional HTML form-based authentication pages
"Low and slow" attacks against form-based authentication or other form-based submissions
API attacks against authenticated and non-authenticated API endpoints
Outlook Web Access/Outlook 365 authentication brute force attacks

All configuration examples and suggested mitigation methods will be based on features available in BIG-IP 14.1 and later, and at the end of this document you will find an Appendix with example configurations summarised and presented for easy deployment.

Chapters
Bad Actor Behaviours and Gathering Statistics using BIG-IP LTM Policies, iRules and BIG-IP AVR | Chapter 2
BIG-IP LTM Mitigation Options for HTTP Brute Force Attacks | Chapter 3
Protecting HTML Form Based Authorization using ASM | Chapter 4
Using the Bot Profile for Brute Force Mitigation | Chapter 5
Slow Brute Force Protection Using Behavioural DOS | Chapter 6
Appendix

HTTP Brute Force Mitigation Playbook: Bad Actor Behavior and Gathering Statistics using BIG-IP LTM Policies, iRules and BIG-IP AVR - Chapter 2
Gathering Statistics and Bad Actor Behavior

In this chapter of the HTTP Brute Force Mitigation Playbook series, we review BIG-IP Local Traffic Manager (LTM) and Application Visibility and Reporting (AVR) module features to show how they can be used to gather statistics, and how those statistics can serve as a baseline for what an organization may consider normal traffic versus malicious traffic as it relates to HTTP Brute Force Attacks.

Normal Traffic and Malicious Traffic Definition

We generally consider normal traffic to be expected client activity associated with an application that does not affect availability or performance. Malicious traffic, on the other hand, is traffic sent to a public facing application or device with the goal of impacting its availability, performance or security. We will focus on malicious traffic related to HTTP Brute Force Attacks.

Bad Actor Behavior

In the case of HTTP Brute Force Attacks, it is common to observe the following Bad Actor Behaviors in malicious traffic:
A spoofed HTTP User-Agent header
Traffic from unexpected Geolocation(s)
Excessive traffic from a certain source IP
Multiple login attempts on a web application from the same or distributed source IPs using different credentials
HTTP requests to an application that iterate through an HTTP parameter's value, from either a single or a distributed traffic source

The malicious traffic could be sent slowly and in low volume, applying one or more of the Bad Actor Behavior combinations in an attempt to be more stealthy, or it could be sent as high volume traffic with a noticeable impact such as high CPU usage on the application server.

Prerequisites

Managing the BIG-IP configuration requires Administrator access. Ensure access to the Configuration Utility (web GUI) and SSH is available. These management interfaces will be helpful in configuring, verifying and troubleshooting the BIG-IP. Having access to the serial console output of the BIG-IP is also helpful.

When working with F5 Support, having an F5 Support/AskF5 account will be helpful. To access your AskF5 account, use the following link: https://support.f5.com/csp/my-support/home

Another helpful site is F5 iHealth, where qkviews extracted from the BIG-IP device can be uploaded and analysed. Logs, statistics, heuristics, graphs and the BIG-IP configuration extracted through a qkview are available in iHealth once uploaded and will aid in diagnosing and troubleshooting efforts. https://ihealth.f5.com/

Register an account on these F5 sites, as they will be helpful when working with F5 Support. When troubleshooting, generating a qkview, extracting BIG-IP logs or viewing configuration in the GUI requires administrative access to the BIG-IP, and access to the referenced F5 Support sites will also be helpful.

A Local Traffic Manager (LTM) and Application Visibility and Reporting (AVR) license is required to use the related features, and the LTM and AVR modules should be provisioned to access the related configuration discussed in this chapter.

Using Local Traffic Manager (LTM) features to gather HTTP request data

Local Traffic Manager (LTM) features such as iRules and LTM Policy can be used to log messages generated at a certain stage of the HTTP request. In the case of an HTTP brute force attack, inspecting the source IP, HTTP headers and HTTP request payload can provide information on what the malicious traffic looks like.

Required License and Provisioning

The BIG-IP LTM module should be licensed and provisioned for LTM features to be available.
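Provisioning and licensing can be confirmed quickly from the BIG-IP command line before relying on these features. This is a sketch, not part of the original article, and the output format varies by TMOS version.

# Show which modules are provisioned (LTM and AVR should appear with a provisioning level).
tmsh show sys provision

# Review the license, including the active modules.
tmsh show sys license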
A "harmless" HTTP request

The following curl command will receive a proper response from the web site it is requesting content from:

curl --silent --output /dev/null -lvk https://172.16.8.84 -H "User-Agent: im-a-web-browser"

The web server does not validate whether this curl request is coming from a legitimate web browser such as a normal user would use. The curl "-H" option allows an HTTP header to be included in the curl HTTP request; in this case "User-Agent: im-a-web-browser" will be the HTTP User-Agent header and value that the web server receives. A legitimate HTTP User-Agent value can also be used for this option, and the web server will simply accept and process the request without validating whether it came from a real web browser or an automated client.

Using Local Traffic Manager (LTM) Policy to log HTTP request data

A Local Traffic Manager (LTM) Policy can be applied to a Virtual Server to match a traffic pattern and execute defined actions. LTM Policy configuration is straightforward and is able to log essential data - in the case of an HTTP Brute Force attack, the source IP, HTTP headers and HTTP payload - and it can log locally or remotely. Here is a sample LTM Policy configuration that logs the source client IP address and port, the HTTP header names, selected interesting header values, and the HTTP payload for all traffic at HTTP request time.

From the BIG-IP Configuration Utility: Local Traffic ›› Policies : Policy List ›› /Common/log_http_traffic:log_http_request_header_payload

log_http_traffic is the name of the LTM Policy, and log_http_request_header_payload is a rule in the log_http_traffic LTM Policy. This view is only available in the Draft state of the LTM Policy. The log messages are configured in the message section of the "Log" action lines; notice that the second Log action specifically logs the User-Agent and X-Forwarded-For HTTP headers. Commonly in HTTP Brute Force Attacks, the HTTP User-Agent header value may be slightly modified compared to a legitimate User-Agent value, imitating a legitimate web browser. Similarly, the X-Forwarded-For value may provide hints about where the attack traffic is really coming from. In scenarios where the attacker may be passing through web proxies, reviewing the X-Forwarded-For header value can be helpful in determining the possible true source of the attack traffic.

Log action 1:
tcl: client [IP::client_addr]:[TCP::client_port] -> URL: [HTTP::host][HTTP::uri]: http headers are [HTTP::header names] and the http payload is [HTTP::payload]

Log action 2:
tcl: client [IP::client_addr]:[TCP::client_port] -> URL: [HTTP::host][HTTP::uri]: interesting HTTP headers are User-Agent:[HTTP::header User-Agent] X-Forwarded-For [HTTP::header X-Forwarded-For]

Notice that the Log actions reference "tcl". LTM Policy actions accept Tcl expressions, which are commonly used in iRules, so iRule commands such as [HTTP::header] can be used in LTM policies.

Important: The iRule commands used in this sample LTM policy are for demonstration purposes only. Complex LTM Policies with iRule commands should be properly tested. F5 Professional Services is the best resource for this type of implementation.

The log facility local0 refers to /var/log/ltm, so the logs generated by this LTM Policy will be written to /var/log/ltm. Other log files can also be specified, and remote logging is also possible.

Here is the Published LTM Policy. The sample LTM Policy is then applied to a Virtual Server where malicious traffic is suspected to be hitting. Here is the text form of the Published LTM Policy:
log_http_request_header_payload
1. Log message 'tcl: client [IP::client_addr]:[TCP::client_port] -> URL: [HTTP::host][HTTP::uri]: http headers are [HTTP::header names] and the http payload is [HTTP::payload]' at request time.
2. Log message 'tcl:client [IP::client_addr]:[TCP::client_port] -> URL: [HTTP::host][HTTP::uri]: interesting HTTP headers are User-Agent:[HTTP::header User-Agent] X-Forwarded-For:[HTTP::header X-Forwarded-For]' at request time.

Log generated in /var/log/ltm:

Nov 29 15:24:22 BIG-IP info tmm7[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.7.31:39063 -> URL: 172.16.8.86/: http headers are Host Accept User-Agent X-Forwarded-For Content-Length Content-Type and the http payload is admin:admin
Nov 29 15:24:22 BIG-IP info tmm7[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.7.31:39063 -> URL: 172.16.8.86/: interesting HTTP headers are User-Agent:im-a-web-browser X-Forwarded-For:88.88.88.88

Using grep, awk and uniq at the BIG-IP bash prompt, /var/log/ltm can be inspected to check how frequently a source client IP sent HTTP requests - along with the HTTP header and payload information - to the Virtual Server where the LTM Policy is applied. The sample grep/awk/uniq command below counts the number of times a client source IP was observed in the log generated by the sample LTM Policy in /var/log/ltm. The sample LTM Policy logs the client source IP twice, and the sample grep/awk/uniq command was written to take account of this.

grep/awk/uniq sample command 1:
grep client /var/log/ltm | awk -F " " '{print $9}' | uniq -c | awk -F ":" '{print $1}' | awk -F " " '{print $2}' | uniq -c

sample output:
root@BIG-IP:Active:Standalone] config # grep client /var/log/ltm | awk -F " " '{print $9}' | uniq -c | awk -F ":" '{print $1}' | awk -F " " '{print $2}' | uniq -c
9 172.16.7.31

grep/awk/uniq sample command 2:
grep interesting /var/log/ltm | awk -F " " '{print $18}' | awk -F ":" '{print $2}' | uniq -c

This sample command counts the number of times each X-Forwarded-For HTTP header value appeared in the log. The word 'interesting' is filtered first because it appears on the line where the X-Forwarded-For information was logged.

sample output:
root@BIG-IP:Active:Standalone] config # grep interesting /var/log/ltm | awk -F " " '{print $18}' | awk -F ":" '{print $2}' | uniq -c
20 88.88.88.88

The sample grep/awk/uniq commands are simple demonstration commands only; you can build your own scripts or versions of the commands to get the desired information, and the sample output is also for demonstration purposes only. In a real HTTP brute force attack, the sample LTM Policy logging the client source IP and HTTP information in each request will be helpful in describing the possibly malicious traffic.
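During an active incident it is often more useful to watch these entries as they arrive rather than after the fact. The following is a small sketch, not part of the original article, using the same log file and field positions as the examples above.

# Follow the policy-generated entries in real time during a suspected attack.
tail -f /var/log/ltm | grep log_http_request_header_payload

# Summarize the top source IPs seen so far, sorted by request count.
grep client /var/log/ltm | awk '{print $9}' | awk -F ":" '{print $1}' | sort | uniq -c | sort -rn | head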
Using F5 iRules to gather HTTP request data

From the F5 iRules Home Page, https://clouddocs.f5.com/api/irules/: F5 iRules is a powerful and flexible feature within the BIG-IP® local traffic management (LTM) system that you can use to manage your network traffic. The iRules feature not only allows you to select pools based on header data, but also allows you to direct traffic by searching on any type of content data that you define. Thus, the iRules feature significantly enhances your ability to customize your content switching to suit your exact needs.

iRules can also be used to log source IP and HTTP information. In comparison to LTM Policy, iRules are much more flexible, and they can also be very complex. Depending on how much information you would like to gather during an HTTP Brute Force attack, iRules can be very verbose. iRules are also resource intensive: the more an iRule is executed as traffic hits the Virtual Server where it is applied, the more CPU resource it consumes. If the same information can be gathered using the LTM Policy feature, use it instead of iRules. For simplicity, we will gather the same HTTP request data as demonstrated in the LTM Policy section.

Important: The iRule commands used in the sample iRules are for demonstration purposes only. Complex iRules should be properly tested. F5 Professional Services is the best resource for this type of implementation.

In the Configuration Utility, iRules can be created at Local Traffic ›› iRules : iRule List. The sample iRule, log_http_request_header_payload, is then applied to a Virtual Server where suspected malicious traffic is hitting.

Sample iRule:

# Log debug to /var/log/ltm? 1=yes, 0=no
when RULE_INIT {
    set static::payload_dbg 1
    # Maximum number of payload characters to log locally
    set static::max_chars 1024
}
when HTTP_REQUEST {
    set LogString "Client [IP::client_addr]:[TCP::client_port] -> [HTTP::host][HTTP::uri]"
    if {$static::payload_dbg}{log local0.debug "============================================="}
    if {$static::payload_dbg}{log local0.debug "$LogString (request)"}
    # log each Header.
    foreach aHeader [HTTP::header names] {
        if {$static::payload_dbg}{log local0.debug "$aHeader: [HTTP::header value $aHeader]"}
    }
    if {$static::payload_dbg}{log local0.debug "============================================="}
    if {[HTTP::header "Content-Length"] ne "" && [HTTP::header "Content-Length"] <= 1048000} {
        HTTP::collect [HTTP::header "Content-Length"]
    } else {
        HTTP::collect 1048000
    }
}
when HTTP_REQUEST_DATA {
    # Log the bytes collected
    if {$static::payload_dbg}{log local0.debug "Collected [HTTP::payload length] bytes"}
    # Log the payload locally
    if {[HTTP::payload length] < $static::max_chars}{
        if {$static::payload_dbg}{log local0.debug "Payload=[HTTP::payload]"}
    }
}

Sample log output in /var/log/ltm:

Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST>: =============================================
Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST>: Client 172.16.7.31:10761 -> 172.16.8.86/ (request)
Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST>: Host: 172.16.8.86
Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST>: Accept: */*
Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST>: User-Agent: im-a-web-browser
Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST>: X-Forwarded-For: 88.88.88.88
Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST>: Content-Length: 11
Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST>: Content-Type: application/x-www-form-urlencoded
Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST>: =============================================
Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST_DATA>: Collected 11 bytes
Nov 29 19:46:03 BIG-IP debug tmm4[11545]: Rule /Common/log_http_request_header_payload <HTTP_REQUEST_DATA>: Payload=admin:admin
Using the same grep/awk/uniq approach, we can observe the number of times a client has sent an HTTP request to a Virtual Server using the sample iRule.

grep/awk/uniq sample command 3:
grep "<HTTP_REQUEST>: Client" /var/log/ltm | awk -F " " '{print $11}' | awk -F ":" '{print $1}' | uniq -c

sample output:
root@BIG-IP:Active:Standalone] config # grep "<HTTP_REQUEST>: Client" /var/log/ltm | awk -F " " '{print $11}' | awk -F ":" '{print $1}' | uniq -c
13 172.16.7.31

grep/awk/uniq sample command 4:
grep "<HTTP_REQUEST>: User-Agent" /var/log/ltm | awk -F " " '{print $11}' | uniq -c

This sample command counts the number of HTTP requests a specific User-Agent has sent to a Virtual Server.

sample output:
root@BIG-IP:Active:Standalone] config # grep "<HTTP_REQUEST>: User-Agent" /var/log/ltm | awk -F " " '{print $11}' | uniq -c
13 im-a-web-browser

Additional Notes on LTM Policy and iRule HTTP request data gathering for HTTP Brute Force attacks

The sample iRule and LTM Policy provided are just one approach to observing the traffic to a Virtual Server suspected of being attacked during an HTTP Brute Force attack. There are more LTM Policy and iRule events that can be used to gather data and observe it to qualify certain characteristics of the traffic that can be considered malicious. In both the LTM Policy and iRule samples, we logged the HTTP payload. During an HTTP Brute Force attack, an attacker - perhaps characterized by a certain source IP or a suspicious User-Agent value - may be sending the HTTP based application random usernames and passwords, or legitimate looking credentials normally extracted from a breach, in an attempt to guess legitimate user credentials to compromise and do further damage. The generated logs can then be used for your analysis and decision making steps in the mitigation of the attack. While these logs are useful during an HTTP Brute Force attack, they may be excessive and unnecessary when there is no HTTP Brute Force attack event; removing the sample iRule or LTM Policy from the Virtual Server prevents further logging.

Using F5 Application Visibility and Reporting to gather HTTP request data

From the BIG-IP Analytics: Implementations manual, https://techdocs.f5.com/kb/en-us/products/big-ip_analytics/manuals/product/big-ip-analytics-implementations-14-1-0/01.html#guid-ce2cb6f5-24b4-4570-a827-b8b4db8e19e8:

Analytics, or Application Visibility and Reporting (AVR), is a module on the BIG-IP® system that you can use to visually analyze the performance of web applications, TCP traffic, DNS traffic, FastL4, and overall system statistics. The statistics are displayed in graphical charts where you can drill down into a specific time range or system aspect to better understand network performance on certain devices, IP addresses, memory and CPU utilization, and so on. You can further focus the statistics in the charts by selecting dimension entities such as applications or virtual servers.

About HTTP Analytics profiles

An HTTP Analytics profile ( Local Traffic > Profiles > Analytics > HTTP Analytics ) is a set of definitions that determines the circumstances under which the system gathers, logs, notifies, and graphically displays information regarding traffic to an application. Each monitored application is associated with an HTTP Analytics profile. You associate the HTTP Analytics profile with one or more virtual servers used by the application. Each virtual server can have one HTTP and/or one TCP Analytics profile associated with it.
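An HTTP Analytics profile can be created and associated with a virtual server from tmsh as well as from the GUI. This is a minimal sketch, not from the original article: the profile and virtual server names are examples, and the available collection settings vary by TMOS version.

# Create a custom HTTP Analytics profile inheriting from the default 'analytics' profile.
tmsh create ltm profile analytics http_brute_force_analytics defaults-from analytics

# Associate the profile with the virtual server to be monitored (an HTTP profile must also be assigned).
tmsh modify ltm virtual vs_example profiles add { http_brute_force_analytics }

# Review the resulting profile settings.
tmsh list ltm profile analytics http_brute_force_analytics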
Required License and Provisioning

The BIG-IP AVR module should be licensed and provisioned for AVR features to be available.

Here is a sample screenshot of data gathered from a Virtual Server with an HTTP Analytics profile applied. This can be viewed in the Configuration Utility at Statistics ›› Analytics : HTTP : Overview.

The HTTP data gathered are the common HTTP statistics we would want to observe and understand during an HTTP Brute Force attack - source client IP addresses, Countries, User Agents, URLs and more - and their corresponding statistics, such as Average Transactions per second and Average Server Latency (ms), are helpful in deciding which mitigation will be useful to quickly mitigate the attack. In the sample screenshot, the User Agents statistics show that "curl/7.47.1" sent the bulk of the requests to the Virtual Server. In this sample scenario, if this HTTP User-Agent is not expected and the HTTP application expects only real web browsers, the User-Agent can be considered for blocking from the Virtual Server. Similarly, specific IP addresses or Countries may be unexpected traffic sources sending high volume traffic; these can also be considered for blocking, depending on what is considered expected traffic. These AVR statistics can also be exported in PDF format for offline review.

An HTTP Analytics profile can be created from the Configuration Utility at Local Traffic ›› Profiles : Analytics : HTTP Analytics. Here is a sample screenshot of a custom HTTP Analytics profile applied to a Virtual Server.

Additional Notes on AVR HTTP request data gathering for HTTP Brute Force attacks

Public facing HTTP applications are expected to receive malicious traffic. Provisioning the AVR module, configuring an HTTP Analytics profile and applying it to an HTTP Virtual Server in advance, before an HTTP Brute Force attack happens, provides advantages. It allows the gathering of HTTP traffic related statistics to understand the HTTP traffic pattern for a particular Virtual Server and helps define a traffic baseline - what normal traffic looks like - along with several other characteristics of the traffic, such as source IP and its geolocation/country information, User Agent information and more.

As an example of how the gathered AVR HTTP request data may be used for HTTP Brute Force attack prevention, an application's login URL may be receiving excessive requests, as observed in the Configuration Utility at Statistics ›› Analytics : HTTP : Overview for an HTTP based Virtual Server with the HTTP Analytics profile applied - in AVR, URL statistics can be reviewed, and "Ave TPS" and "Transactions" are available - and the application owner may define this as an attack. Without an HTTP Analytics profile applied to the Virtual Server, alternative manual methods or a monitoring system would need to have been in place to describe the suspected HTTP Brute Force attack event. Very commonly, HTTP traffic information such as the User-Agent may be spoofed, and upon reviewing the number of transactions it sent during the HTTP Brute Force attack event, the application owner may decide to block the specific User-Agent value.

Sample Application of AVR and LTM Policy data gathering techniques

In this sample HTTP Analytics screenshot, the login URL /WackoPicko/users/login.php received a high volume of transactions from a specific source IP address. A closer look at the page information shows an unusual looking User Agent was sending the suspicious amount of traffic.
An application owner may consider this an attack on the login URL and decide to block the specific IP address or the User Agent.

Applying the sample LTM Policy where HTTP request data was logged, and referring back to the HTTP AVR data screenshot, it can be observed that a HTTP Brute Force Attack was happening: username=!@#$%25 and random passwords are being sent to the login URL /WackoPicko/users/login.php.

sample logs of a HTTP Brute Force Attack captured using the sample LTM Policy

[root@BIG-IP:Active:Standalone] config # grep username /var/log/ltm | head
Nov 30 17:54:10 BIG-IP info tmm4[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.5.40:49133 -> URL: 172.16.8.86/WackoPicko/users/login.php: http headers are Host User-Agent Cookie Content-Length Content-Type and the http payload is username=!@#$%25&password=12345678
Nov 30 17:54:10 BIG-IP info tmm5[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.5.40:49134 -> URL: 172.16.8.86/WackoPicko/users/login.php: http headers are Host User-Agent Cookie Content-Length Content-Type and the http payload is username=!@#$%25&password=jessica
Nov 30 17:54:10 BIG-IP info tmm6[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.5.40:49135 -> URL: 172.16.8.86/WackoPicko/users/login.php: http headers are Host User-Agent Cookie Content-Length Content-Type and the http payload is username=!@#$%25&password=lovely
Nov 30 17:54:10 BIG-IP info tmm4[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.5.40:49136 -> URL: 172.16.8.86/WackoPicko/users/login.php: http headers are Host User-Agent Cookie Content-Length Content-Type and the http payload is username=!@#$%25&password=abc123
Nov 30 17:54:10 BIG-IP info tmm5[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.5.40:49137 -> URL: 172.16.8.86/WackoPicko/users/login.php: http headers are Host User-Agent Cookie Content-Length Content-Type and the http payload is username=!@#$%25&password=monkey
Nov 30 17:54:10 BIG-IP info tmm6[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.5.40:49138 -> URL: 172.16.8.86/WackoPicko/users/login.php: http headers are Host User-Agent Cookie Content-Length Content-Type and the http payload is username=!@#$%25&password=123456789
Nov 30 17:54:10 BIG-IP info tmm7[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.5.40:49139 -> URL: 172.16.8.86/WackoPicko/users/login.php: http headers are Host User-Agent Cookie Content-Length Content-Type and the http payload is username=!@#$%25&password=654321
Nov 30 17:54:10 BIG-IP info tmm[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.5.40:49140 -> URL: 172.16.8.86/WackoPicko/users/login.php: http headers are Host User-Agent Cookie Content-Length Content-Type and the http payload is username=!@#$%25&password=12345
Nov 30 17:54:10 BIG-IP info tmm1[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.5.40:49141 -> URL: 172.16.8.86/WackoPicko/users/login.php: http headers are Host User-Agent Cookie Content-Length Content-Type and the http payload is username=!@#$%25&password=daniel
Nov 30 17:54:10 BIG-IP info tmm2[11545]: [/Common/log_http_traffic/log_http_request_header_payload]: client 172.16.5.40:49142 -> URL: 172.16.8.86/WackoPicko/users/login.php: http headers are Host User-Agent Cookie Content-Length Content-Type and the http payload is
username=!@#$%25&password=princess

Using the sample logs generated during the HTTP Brute Force attack, running a combination of grep/cut/awk/uniq/sort commands provides insight into the top 10 usernames and their approximate number of login attempts.

grep/cut/awk/uniq/sort sample command:
grep username /var/log/ltm | cut -d " " -f 27 | awk -F "&" '{print $1}' | awk -F "=" '{print $2}' | uniq -c | sort -rnk 1 | head

sample output:
[root@sec8:Active:Standalone] config # grep username /var/log/ltm | cut -d " " -f 27 | awk -F "&" '{print $1}' | awk -F "=" '{print $2}' | uniq -c | sort -rnk 1 | head
1006 aberrational
1006 abe
1006 abdul
1006 abduct
1006 abdominally
1006 abdominal
1006 abdomen
1006 abdication
1006 abcdefg
1006 abcdef

Similarly, looking specifically at a single username's approximate number of login attempts - in this example, '!@#$%25':

sample command:
grep '!@#$%25' /var/log/ltm | wc -l

sample output:
[root@sec8:Active:Standalone] config # grep '!@#$%25' /var/log/ltm | wc -l
4024898
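The same logs can be summarized in other ways to characterize the attack. The rough sketch below assumes the sample LTM Policy log format shown above (client IP:port is the 9th space-separated field and the login URL is /WackoPicko/users/login.php in this lab); adjust the grep pattern and field positions for your own environment.

# Approximate number of login attempts per source IP
grep "users/login.php" /var/log/ltm | awk '{print $9}' | awk -F ":" '{print $1}' | sort | uniq -c | sort -rn | head

# Approximate login attempts per minute, to gauge the rate (and slowness) of the attack
grep "users/login.php" /var/log/ltm | awk '{print $1, $2, substr($3,1,5)}' | sort | uniq -c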
HTTP Brute Force Mitigation Playbook: Protecting HTML Form Based Authorization using ASM - Chapter 4

Introduction

For a long time HTML Form based authorization was - and possibly still is - the most common authorization type in use by websites on the internet; more recently, of course, we have seen a move to federated authorization mechanisms (OAuth, OpenID etc) and "Web 2.0" technologies that utilise client side JavaScript and JSON, XML or another representation layer to transport credentials. In this chapter, however, we will concentrate on HTML Form Based Authorization, where the application flow will commonly consist of a series of browser requests, each resulting in either a new page being rendered in the browser or a subsequent browser request. For example, a common login flow might look like:

GET /login.html HTTP/1.1
Host: zuul

HTTP/1.1 200 OK
Content-Length: 175
Content-Type: text/html

<html>
<head>
</head>
<body>
<form method="POST" action="/loginsubmit.php">
<label for="username">Username:<input name="username" type="text"></label><br>
<label for="password">Password:<input name="password" type="password"></label><br>
<input name="submit" type="submit">
</form>
</body>
</html>

POST /loginsubmit.php HTTP/1.1
Host: zuul
Content-Length: 41
Content-Type: application/x-www-form-urlencoded

username=test&password=xxxx

HTTP/1.1 200 OK
Content-Length: 58
Content-Type: text/html; charset=UTF-8

<html>
<head>
</head>
<body>
Bad password
</body>
</html>

Or, to visualise this in the browser, the first step is a GET request which returns the login page:

And the second step is a POST request submitting the entered credentials, which returns a success/fail message:

Of course, the precise flow will differ depending on the application - most commonly there are variations in the response returned for a valid versus an invalid login - and when configuring the Brute Force protections in ASM the administrator will need to understand the application flow and the requests and responses that are expected in both a valid and an invalid login flow. Chrome's Developer Tools or an intercepting proxy such as Burp or ZAP can help you capture the login flow in a form that can be exported (e.g. a HAR file) and sent to others for analysis if necessary.

Preparing the Environment

As mentioned in the introduction, you should familiarise yourself with the application flow. Questions to ask yourself are:

What is the entry point to the login flow?
- What request is made in order to render the login page to the user?

What does a successful login look like?
- What is the HTTP response code returned by the server for a valid login - common examples being 200 (OK) and 302 (Moved Temporarily)?
- Is there any identifiable content within the response that indicates a valid login, other than the response code?
- Are there any headers specific to a valid login?

What does an unsuccessful login look like?
- What is the HTTP response code returned by the server for an invalid login, and is it different to that of a valid login - common examples, again, being 200 (OK) and 302 (Moved Temporarily)?
- Is there any identifiable content within the response that indicates an invalid login attempt, other than the response code?
- Are there any headers specific to an invalid login?

It is crucial that you determine how to identify the difference between a valid and an invalid login flow, as you will need that detail in order to configure the ASM.
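If you prefer the command line over Developer Tools or an intercepting proxy, a quick way to answer the questions above is to replay the login POST with curl and compare the responses. The sketch below assumes the example form above (/loginsubmit.php with 'username' and 'password' parameters) and a placeholder hostname; substitute your own host, path, parameter names and test credentials.

# Known-bad credentials: -i prints the status line and headers so you can note the response code,
# any distinctive headers, and strings such as "Bad password" in the body
curl -k -i -s https://app.example.com/loginsubmit.php -d 'username=test&password=wrongpassword'

# Known-good credentials for comparison
curl -k -i -s https://app.example.com/loginsubmit.php -d 'username=test&password=CorrectPassword'

# Quick comparison of just the response codes
curl -k -s -o /dev/null -w '%{http_code}\n' https://app.example.com/loginsubmit.php -d 'username=test&password=wrongpassword'

Whatever differences you find (response code, a header, or a string in the body) become the Access Validation criteria you will configure in the next section.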
Configuring Brute Force Protection

In this section we will run through the basic settings necessary in ASM to configure Brute Force protection for the login form you examined in the earlier sections; we will not dive deeply into all of the configuration options, but these steps should provide a solid starting point and good basic protection.

We will assume that you already have the following in place:
- A Virtual Server handling the application traffic
- A suitable Logging Profile associated with the Virtual Server to ensure that relevant requests are logged
- A basic ASM policy with Enforcement Mode set to Blocking, already associated with the Virtual Server

Define the Login Page
- Navigate to Security → Application Security → Sessions and Logins → Login Pages List
- Click Create
- Define the URL - this is the place to which your application submits credentials; in the example above, that is /loginsubmit.php
- Set Authentication Type to HTML Form
- Set the HTTP parameter that contains the username - in the example above, that is 'username'
- Set the HTTP parameter that contains the password - in the example above, that is 'password'
- Using the Access Validation options, define how the ASM should detect a valid (or invalid, in the case of the "NOT" options) login, e.g.
- Click Create

Define the Brute Force settings
- Navigate to Security → Application Security → Brute Force Attack Prevention
- Click Create
- Select your previously defined Login Page using the drop-down list beside "Login Page"
- Select your Source-based Brute Force Protection mitigation options
- Click Create
- Click Apply Policy

When configuring your mitigation options, it is important to understand the types of clients you expect to serve as valid clients and to respect their capabilities when selecting mitigation mechanisms. For example, if you expect users on desktop or mobile browsers, both CAPTCHA and Client Side Integrity options will work successfully. However, if you expect a mix of browser and non-browser clients (mobile applications or Microsoft Outlook, for example), then both CAPTCHA and Client Side Integrity challenges will cause failures for the non-browser clients because of the JavaScript they inject. In that case, consider using only mitigations that are tied to specific Source IP addresses and non-JavaScript based mitigations (such as Alarm and Drop), along with a shorter Maximum Prevention Duration, so that non-browser clients who enter an incorrect password and trigger the protection mechanism are not denied access for too long. A simple way to verify the behaviour once everything is configured is sketched after the Whitelisting section below.

Whitelisting

You may need to consider whitelisting specific clients or IP ranges - for example, you may have a Virtual Server servicing both internal and external clients, and/or need to whitelist IP addresses belonging to a Virtual Private Network. You can do this simply via the GUI, e.g.:
- Navigate to Security → Application Security → IP Addresses → IP Address Exceptions
- Click Create
- Enter the IP Address and Netmask you wish to Whitelist
- Check "Ignore in Brute Force Detection"
- Click Create, e.g.
- Click Apply Policy
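As promised above, here is a rough way to verify the configuration from a non-whitelisted test client. The hostname, URL and parameter names are placeholders from the earlier example, and the exact behaviour you observe depends on the mitigations you selected (a CAPTCHA or Client Side Integrity challenge page, or dropped requests with Alarm and Drop); check the ASM event logs, per your logging profile, to confirm the mitigation fired.

# Send a burst of failed logins from one source and watch how the response changes
# once the configured thresholds are crossed
for i in $(seq 1 30); do
  curl -k -s -o /dev/null -w "attempt $i: %{http_code} %{size_download}\n" \
    https://app.example.com/loginsubmit.php \
    -d "username=test&password=wrong$i"
done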
HTTP Brute Force Mitigation Playbook: Bot Profile for Brute Force Mitigations - Chapter 5

Bot traffic can be challenging to handle in general, and bots that perform brute force or credential stuffing attacks specifically require powerful bot detection. The Advanced WAF bot profile, released in version 14.1, is a powerful bot manager for protecting your web application. As of version 14.1, the bot profile is a dedicated service that works in parallel to the ASM policy and the DoS profile. This chapter describes the configuration options of the bot profile in Advanced WAF version 14.1 and above when dealing with brute force or credential stuffing attacks.

When to use it:

General use and visibility: The bot profile is a powerful on-premises bot manager that allows control over the type of traffic accessing the web application. By classifying bot traffic as known good or known bad, it becomes easier to manage. The bot profile can be used before or during an automated attack. The optimal approach is to set the bot profile in transparent mode (no traffic will be blocked), understand the traffic arriving at the application, and only then make educated decisions on the site access policy (SAP). The trusted bot classification allows search engines and monitoring tools to continue working as expected, and the reporting provides valuable information on the traffic.

Brute force: For brute force attacks, the bot profile can detect and prevent the automation used by the attack agent through anomalies, User-Agent checks and other detection techniques.

Pre configuration

Bot profile and logging profile

The new bot profile should be assigned to the relevant virtual server under Local Traffic->Virtual Servers:Virtual Server List->virtual_server_name->Bot Defense Profile -> from the default options choose "bot-defense".

The bot defense profile should also be assigned a logging profile. Choose local-bot-defense from the Available list and move it to Selected. While the default logging profile is good enough, you can create a new logging profile for bot defense under Security->Event Logs->Logging Profiles -> Create -> choose the Bot Defense option. Click Update.

DNS resolver

The bot profile uses a reverse DNS lookup check to identify known good search engines. Therefore a DNS server and a DNS resolver must be configured on the BIG-IP.

DNS server is configured under System->Configuration->Device->DNS
DNS resolver is configured under Network->DNS Resolver->DNS Resolver List

Caution: using the bot profile to mitigate attacks and NOT configuring the DNS resolver may affect analytics data or monitoring crawlers such as Google Analytics.

Bot profile concept

The bot profile is the unification of several mechanisms into one, now called the bot profile. It automates the classification of requests into bot types, and the configured prevention policy is applied according to that classification.
Browser – classified as a browser; additional checks will be done, such as browser capabilities, source scoring and more
Mobile App - Mobile App with SDK, Mobile App
Trusted Bot - Signature categories: Search Engine
Untrusted Bot - Signature categories: Crawler, HTTP Library, Headless Browser, RSS Reader, Search Bot, Service Agent, Site Monitor, Social Media Agent, Web Downloader
Suspicious Browser - Anomaly categories: Suspicious Browser Type, Suspicious Browser Extension
Malicious Bot - Signature categories: DOS Tool, E-Mail Collector, Exploit Tool, Network Scanner, Spam Bot, Spyware, Vulnerability Scanner, Web Spider; Anomaly categories: Browser Automation, Malicious Browser Extensions, Headless Browser Anomalies, Browser Masquerading, Mobile App Automation, Mobile App Masquerading, Illegal Mobile App, Search Engine Masquerading, OWASP Automated Threat Anomalies, Classification Evasion

Each class type gets the prevention policy that was defined in the template.

Bot profile - General settings

The Profile Template is defined once, when the bot profile is created, and includes several modes:

Relaxed Mode – a security policy that is not intrusive, because no JavaScript is injected for source identification or for device ID generation. The only blocked bots are the ones classified as malicious bots; the rest of the bots are alarmed only.

Balanced Mode – a security policy that is more intrusive and includes JavaScript injection to identify the sources and to generate a device ID. This mode is considered more intrusive because the JavaScript is injected in the first response (after access). By default, malicious bots are blocked, suspicious browsers get CAPTCHA and unknown bots are rate limited.

Strict Mode – the most intrusive security policy, which injects the JavaScript on the first request (before access). This approach is considered intrusive but much more accurate in detection. By default all classified bot types are blocked, except trusted bots, which are alarmed.

It is recommended to do the initial implementation with the Relaxed mode for visibility and false positive reduction, so that when an attack starts the profile can be moved to blocking mode immediately. When under a brute force attack, the setting should be changed to Strict mode so that attackers are blocked regardless of bot type. Note that this includes blocking other bots that might be legitimate, but during an attack the priority is to prevent the "you got hacked" scenario. In cases where the web application must favor availability over confidentiality, the Relaxed template should be used, gradually adding other policy elements that trade off between attack mitigation (confidentiality) and web application performance (availability).

Mitigation settings tab

The bot profile classifies bot traffic into the following types:

Trusted Bot - bots that are detected using search engine signatures. Note that there is no anomaly detection for Trusted Bots; this means that even if anomalies were found, the request is passed.
Untrusted Bot - bots that are detected using signatures for untrusted bots. This group is covered by the bot signature category Benign (excluding the Search Engines).
Suspicious Browser - browser clients for which anomalies of the Suspicious Browser category were detected.
Malicious Bot - bots that are detected using malicious bot signatures or malicious behavioral anomalies.
Unknown - bots that were not detected by either bot signatures or behavioral anomalies. For example, Tor browsers should get a CAPTCHA to verify that the client is not automated, but they will be blocked if the site access policy defines that they are not allowed.
Suspicious HTTP Headers Presence or Order - a missing header can be indicative of bot activity.

Browser verification tab

Allows you to fine tune and override the template setting for the level of intrusiveness.

Browser verification - defines how intrusive the browser classification process is:
Challenge free – no JavaScript injection will be done and only passive detection is performed
Verify after access – the JavaScript will be injected in the first response
Verify before access – the JavaScript will be injected in the first request

The grace period is the time between a successful CAPTCHA challenge solution and when another CAPTCHA challenge can be sent for a request.

Device ID mode – defines when the JavaScript for generating the Device ID will be injected: after access or before access, corresponding to the first response or the first request.

Verification and Device-ID Challenges in Transparent Mode allows challenges and JavaScript injections even though the profile is configured with Transparent Enforcement Mode and no mitigations will be done. The challenges will be logged.

Single Page Application - select this if your website is a Single Page Application, meaning a web application that loads new content without triggering a full page reload. The system will inject JavaScript code into every HTML response. This allows handling browser verification challenges, Device ID challenges and CAPTCHA without requiring a page reload.

Mobile Applications tab

In the Mobile Applications settings you can define how requests from mobile application clients built with the Anti-Bot Mobile SDK are handled. This feature requires an Anti-Bot Mobile SDK license to be operational. For applications that don't use the Anti-Bot Mobile SDK, add an "Allowed Application" signature in Applications without Anti-Bot Mobile SDK. Note that this relies on analyzing the User-Agent header field only, which can be easily spoofed, and should be used with care.

Signature Enforcement tab

Bot signatures are a powerful first line of defense and should initially be enabled in transparent mode to understand the traffic arriving at the web application. When under a brute force attack it is recommended to enforce the bot signatures to filter out malicious traffic before it reaches the web application. Note that whitelisted sources, including IPs and search engines, are still allowed if configured properly as noted in the Whitelist tab.

In the Signature Enforcement settings you can change the enforcement setting. If a signature is in staging, one of two states is shown:
Ready to be enforced
Waiting for more traffic samples
Use the filter to find one or more signatures in the list.

Bot defense signature pool

The lists of Bot Signatures and Signature Categories are located under Security››Bot Defense : Bot Signatures. A new signature for a specific bot can be created manually; this is covered in more detail in: Writing Custom Attack Signatures. Bot signatures are updated periodically by F5; this process is documented in: Updating Attack and Bot Signatures.

Whitelist tab

The Whitelist setting is used to disable mitigation actions or browser verification and Device ID challenges for selected sources. It is recommended to whitelist sources known to be good, to exempt them from being classified by the browser verification process.
There is also an option to whitelist specific IPs for specific URLs. Each IP that is added to the whitelist can be exempted from:
mitigation action – the actual action taken on the classified source, e.g. Alarm, CAPTCHA, Rate Limit, Block, TCP Reset, Honeypot Page, Redirect to Pool
browser verification and Device ID challenges - the classification process of injecting JavaScript to classify the source with a capabilities script or device ID injection.

Reporting

The Bot Traffic summary is located under Security››Event Logs : Bot Defense : Bot Traffic. The graph displays the bot types and traffic statistics, indicating the percentage of human traffic versus bot traffic. When under a brute force attack this graph helps in investigating the offending sources.

The bot request page provides a list of requests that were caught by the bot profile. Drilling down on requests allows viewing by bot type or by incident, as well as the mitigation action that was applied to them. The list of bots is a powerful tool to see the sources that are hitting the site and helps in understanding the type of bots used to execute the brute force attack.

The request log shows the reasons for the request classification in the Request Details section (click on All Details). The bot details describe the bot name, class and category. In cases where the attack is done via a browser, the bot name will be the browser name. For example, if a brute force attack is done via Burp proxy using a browser User-Agent, the bot name will be derived from that User-Agent:
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36

The bot request page allows various filters to be applied to help with the search for the offending brute force bot. The following filter options are helpful when tracking the brute force sources:
Specific URL – specify the login URL to see the types of bots accessing the login; one of those sources is the offending bot (under the advanced filter)
Time range – specify the time range in which the brute force occurred to examine the bots that accessed the web application during the attack (under the advanced filter)
Source IP address – specify the suspicious source IP to see the types of bots arriving from this IP

A quick way to generate classifiable test traffic and exercise these reports is sketched after the resources link below.

more Resources:
Bot defense configuration details - https://support.f5.com/csp/article/K42323285
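While the bot profile is still in transparent/alarm mode, you can generate some easily classifiable traffic and then review how it shows up in the Bot Traffic and bot request pages described above. The hostname below is a placeholder, and the exact classification you see depends on your profile settings and signature set; typically curl's default User-Agent matches an HTTP Library signature (untrusted bot), while a spoofed browser User-Agent with no JavaScript support surfaces differently once browser verification challenges are enabled.

# Request with curl's default User-Agent - normally matched by an HTTP Library signature
curl -k -s -o /dev/null https://app.example.com/

# The same request with a spoofed browser User-Agent; without JavaScript support it cannot
# answer browser verification or Device ID challenges, so compare how it is classified in the logs
curl -k -s -o /dev/null -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" https://app.example.com/

After sending both, review the entries under Security››Event Logs : Bot Defense : Bot Traffic and the bot request page, and confirm the classification and the (would-be) mitigation action for each.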
HTTP Brute Force Mitigation Playbook: Slow Brute Force Protection Using Behavioural DOS - Chapter 6

In a typical Brute Force attack, the attacker tries to find users' passwords quickly, but there are times when the attacker is not in a hurry and wants the attack to stay under the radar by using a very slow brute force attack. This cannot reliably be caught by the detection criteria of the Brute Force protection feature of Advanced WAF/ASM: if you try to tweak the settings to catch a slow brute force attack, it becomes very hard for ASM/Advanced WAF to distinguish between the attack and legitimate user login attempts. We can use other protections available in ASM/Advanced WAF to protect against slow brute force attacks. In this chapter, to protect against slow brute force attacks, we will use TLS signatures generated by Behavioural DoS (BADOS). But first: benefits, limitations and requirements.

Benefits
Benefits of using TLS fingerprints:
Good and bad clients can be differentiated based on the SSL handshake.
Once Advanced WAF/ASM is 100% confident, the user does not have to do anything to find the unusual/attack traffic pattern.
This can also be used to protect mobile applications, as it does not use JavaScript.

Limitations
To generate TLS fingerprinting signatures using BADOS, legitimate traffic must first be learned by Advanced WAF/ASM.
On ASM, Behavioural DOS can be configured on a maximum of 2 virtual servers. On Advanced WAF, although there is no license limitation on attaching a DoS profile with BADOS enabled to a virtual server, it is not recommended to configure more than 70 BADOS enabled virtual servers per box.

Requirements
ASM/Advanced WAF license.
Appropriate rights to access/make changes from the GUI and command line.
Some of the reporting is available only if AFM is provisioned in addition to the above mentioned modules. (If AFM is not provisioned you can still find the information using the CLI.)

Proactiveness

As a general rule, instead of waiting for an attack and then taking the necessary action, we should always be proactive in defending against attacks.

Preparation for mitigating a slow Brute Force attack

A slow brute force attack is very hard to detect, so the most important requirement for protecting the application is that Advanced WAF/ASM knows what normal traffic looks like. For that we can use the Behavioral & Stress-based (D)DoS Detection option under the DoS Protection profile of Advanced WAF/ASM. To configure a DoS Protection profile to protect against slow brute force attacks using TLS fingerprinting, follow the steps below.

Important: For BIG-IP ASM/Advanced WAF 14.1.0, you can access the TLS fingerprinting signatures configuration section only when you have previously selected Use Legacy Application DoS view in the HTTP Properties configuration pop-up.

Go to Security››DOS Protection››Protection Profiles and click Create.
Enter the profile name as per your requirement, select the family as HTTP and press Commit Changes to System.
Click on the newly shown HTTP entry and then click to configure settings for the HTTP family.
Next click on Use Legacy Application DOS View.
Go to Behavioral and stress-based detection under Application Security.
Change the Operation Mode to Blocking and the Threshold Mode to Automatic.
Under Behavioral Detection and Mitigation, enable Request signature detection along with TLS fingerprinting signatures and Use approved signatures only (in case you don't want to use unapproved signatures).
Leave all the other settings unchanged and click Save, then Finished.
(Make sure Bad actors behavior detection is unchecked, as we want to use the TLS signature.)
Set Mitigation to Standard, or as per your requirements, from the available options and then click Save.

Next, apply the newly created DoS profile to the appropriate HTTPS virtual server:
Go to Local Traffic > Virtual Servers.
Select the name of the HTTPS virtual server.
Go to the Security tab and select Policies.
For DoS Protection Profile, select Enabled.
For Profile, select the DoS profile created in the above steps.
Select the Update button.

Let normal traffic pass through the VS. This will allow ASM to learn the traffic.

How do we know ASM is ready and is 100% confident about the normal traffic?
Log in to the CLI of the BIG-IP.
Run the command "admd -s vs./Common/<VS name>+/Common/<DoS profile name>.info.learning"
For example: admd -s vs./Common/BF-PHP+/Common/ASM-TLS-Fingerprinting.info.learning

You will see output similar to the one below:
vs./Common/BF-Test+/Common/Brute-Force-test.info.learning:[0, 0, 0, 0]

Once traffic starts passing through the VS, these values will increase. Each value has its own meaning, as described below.

A. baseline_learning_confidence:
Description: how confident, in %, the system is in the baseline learning.
Desired Value: > 90%

B. learned_bins_count:
Description: number of learned bins.
Desired Value: > 0

C. good_table_size:
Description: number of learned requests.
Desired Value: > 2000

D. good_table_confidence:
Description: how confident, as %, the system is in the good table.
Desired Value: must be 100 for signatures

You may run the command again if Behavioral DoS is still learning:

Still learning
admd -s vs./Common/BF-PHP+/Common/ASM-TLS-Fingerprinting.info.learning

The Behavioural DOS feature is based on learning and analysing all traffic to the web application, building baselines, and then identifying anomalies when server stress is detected. So it is important to know when the server is stressed and how to check the server stress level.

To find out the stress level, go to Security››DoS Protection››Protected Objects (this option is only available if you have AFM provisioned). Find the VS for which you would like to check the status and click the arrow below Attack Status. Once you click it, detailed information is displayed on the screen, which includes Server Stress.

To check the Server Stress using the CLI you may run the command below:
admd -s vs./Common/<VS name>+/Common/<DoS profile name>.sig.health

Server Stress value range:
If there is no traffic, the value is 0.5
If the server functions properly, the value is between (0,1)
A value higher than 1 is considered load, and mitigation may be applied

for example
admd -s vs./Common/BF-PHP+/Common/ASM-TLS-Fingerprinting.sig.health

Once the output of the command below shows appropriate values (as described above), telling us ASM is confident, ASM is ready to differentiate between normal and attack traffic. The output below shows ASM is 100% confident:
admd -s vs./Common/BF-PHP+/Common/ASM-TLS-Fingerprinting.info.learning

Slow Brute Force attack has been reported

To check the status of the attack and the Server Stress level:
Go to Security››DoS Protection››Protected Objects (this option is only available if you have AFM provisioned).
Find the VS for which you would like to check the status and click the arrow below Attack Status. Once you click it, detailed information is displayed on the screen. In the example shown below, Server Stress is now 100.

If AFM is not provisioned you may run the command below to check if the server is under stress.
admd -s vs./Common/<VS name>+/Common/<DoS profile name>.sig.health

Server Stress value range:
If there is no traffic, the value is 0.5
If the server functions properly, the value is between (0,1)
A value higher than 1 is considered load, and mitigation may be applied

For example
admd -s vs./Common/BF-PHP+/Common/ASM-TLS-Fingerprinting.sig.health

You may continue to monitor the output using the command line or the GUI to find out whether an attack has started.

To check if an attack has started using the command line: if the value is 0,0 there is no attack; if the value is 1 the VS is under attack.
admd -s vs./Common/<VS name>+/Common/<DoS profile name>.info
for example:
admd -s vs./Common/BF-PHP+/Common/ASM-TLS-Fingerprinting.info

Using the GUI: go to Security››DoS Protection:Protected Objects
Note: to get this view, AFM should be provisioned.

If you continue to monitor, you may notice that BADOS has started generating a signature. Its accuracy will not be 100% at the start, and it may take some time to become 100% accurate.

Using the CLI:
admd -s vs./Common/BF-PHP+/Common/ASM-TLS-Fingerprinting.info

Using the GUI: Security››DoS Protection››Protected Objects (this option is only available if you have AFM provisioned)
If the Dynamic Signature status is "unready", the signature is not ready and does not yet have 100% accuracy.
Note: to get this view AFM should be provisioned; if AFM is not provisioned you may continue monitoring using the CLI.

Once the signature is ready, the Dynamic Signature status will change as shown below.
Note: to get this view AFM should be provisioned; if AFM is not provisioned you may continue monitoring using the CLI.

Once the signature's accuracy is 100%, it will be available under Security››DoS Protection:Signatures >> Dynamic, as shown below.

You may notice in the above screenshot that the accuracy of the signature is 100% whereas the approval status is Unapproved. If you want to use only approved signatures (which we have done in this case), you need to click the check box in front of the signature; as soon as you enable the check box a window pops up on the right side, where you can enable the check box in front of Approved and then press Update to manually approve the signature.
Note: Use approved signatures only, under Behavioral & Stress-based (D)DoS Detection in the DoS profile, should be enabled.

Once you approve the signature, the signature approval state will change to Manually Approved as shown below.

You may also check the DoS logs under Security››Event Logs››DoS››Application Events.

Another graphical view for DoS is available under Security››Reporting:DoS:Dashboard. If you want to check a specific attack ID, then on the right side under Attack IDs find the attack ID and click on it. As soon as you click on it, the page shows the data related to that specific attack ID, as shown below.

As shown above, during the attack the TLS signature generated by Behavioural DOS is mitigating the attack while normal requests are still passing through.

Note: By default, when the system identifies signature pattern anomalies, it silently drops the connection. You can change the mitigation mode and force the system to send a reset (RST) when the traffic matches a signature pattern. To change the mitigation mode from drop to reset, perform the following steps:
1. Log in to tmsh by typing the following command: tmsh
2.
To change the mitigation mode to reset, type the following command:
modify sys db adm.mitigation.accelerated.signatures.drop.mode value reset

Note: If you want to generate HTTP signatures using BADOS instead of TLS signatures, in the DoS Protection profile you can select accelerated signatures; the rest of the steps remain the same.

HTTP Brute Force Mitigation Playbook: Appendix
This is the HTTP Brute Force Mitigation Playbook: Appendix where some the sample configurations are located. Mitigation: TLS Fingerprint To use the TLS Fingerprint iRules, create separate iRules in the Configuration Utility for iRule 1 - FingerprintTLS proc and iRule 2- the rate limiting iRule. Apply TLS Fingerprint Rate Limiting iRule to a Virtual Server that needs to be protected. See "Mitigation: TLS Fingerprint" section in Chapter 3 - BIG-IP LTM Mitigation Options for HTTP Brute Force Attacks for sample internal and external Data Groups configuration. External Data Group fingerprint_db Internal Data Group malicious_fingerprintdb In edit mode, String and Value are exposed ## iRule #1 - FingerprintTLS-proc ## iRule #1 - FingerprintTLS-proc # from Library-Rule in # https://devcentral.f5.com/s/articles/tls-fingerprinting-a-method-for-identifying-a-tls-client-#without-decrypting-24598 ## TLS Fingerprint Procedure ################# ## ## Author: Kevin Stewart, 12/2016 ## Derived from Lee Brotherston's "tls-fingerprinting" project @ https://github.com/LeeBrotherston/tls-fingerprinting ## Purpose: to identify the user agent based on unique characteristics of the TLS ClientHello message ## Input: ## Full TCP payload collected in CLIENT_DATA event of a TLS handshake ClientHello message ## Record length (rlen) ## TLS outer version (outer) ## TLS inner version (inner) ## Client IP ## Server IP ############################################## proc fingerprintTLS { payload rlen outer inner clientip serverip } { ## The first 43 bytes of a ClientHello message are the record type, TLS versions, some length values and the ## handshake type. We should already know this stuff from the calling iRule. We're also going to be walking the ## packet, so the field_offset variable will be used to track where we are. set field_offset 43 ## The first value in the payload after the offset is the session ID, which may be empty. Grab the session ID length ## value and move the field_offset variable that many bytes forward to skip it. binary scan ${payload} @${field_offset}c sessID_len set field_offset [expr {${field_offset} + 1 + ${sessID_len}}] ## The next value in the payload is the ciphersuite list length (how big the ciphersuite list is. We need the binary ## and hex values of this data. binary scan ${payload} @${field_offset}S cipherList_len binary scan ${payload} @${field_offset}H4 cipherList_len_hex set cipherList_len_hex_text ${cipherList_len_hex} ## Now that we have the ciphersuite list length, let's offset the field_offset variable to skip over the length (2) bytes ## and go get the ciphersuite list. Multiple by 2 to get the number of appropriate hex characters. set field_offset [expr {${field_offset} + 2}] set cipherList_len_hex [expr {${cipherList_len} * 2}] binary scan ${payload} @${field_offset}H${cipherList_len_hex} cipherlist ## Next is the compression method length and compression method. First move field_offset to skip past the ciphersuite ## list, then grab the compression method length. Then move field_offset past the length (2) bytes and grab the ## compression method value. Finally, move field_offset past the compression method bytes. 
set field_offset [expr {${field_offset} + ${cipherList_len}}] binary scan ${payload} @${field_offset}c compression_len #set field_offset [expr {${field_offset} + ${compression_len}}] set field_offset [expr {${field_offset} + 1}] binary scan ${payload} @${field_offset}H[expr {${compression_len} * 2}] compression_type set field_offset [expr {${field_offset} + ${compression_len}}] ## We should be in the extensions section now, so we're going to just run through the remaining data and ## pick out the extensions as we go. But first let's make sure there's more record data left, based on ## the current field_offset vs. rlen. if { [expr {${field_offset} < ${rlen}}] } { ## There's extension data, so let's go get it. Skip the first 2 bytes that are the extensions length set field_offset [expr {${field_offset} + 2}] ## Make a variable to store the extension types we find set extensions_list "" ## Pad rlen by 1 byte set rlen [expr {${rlen} + 1}] while { [expr {${field_offset} <= ${rlen}}] } { ## Grab the first 2 bytes to determine the extension type binary scan ${payload} @${field_offset}H4 ext ## Store the extension in the extensions_list variable append extensions_list ${ext} ## Increment field_offset past the 2 bytes of the extension type set field_offset [expr {${field_offset} + 2}] ## Grab the 2 bytes of extension lenth binary scan ${payload} @${field_offset}S ext_len ## Increment field_offset past the 2 bytes of the extension length set field_offset [expr {${field_offset} + 2}] ## Look for specific extension types in case these need to increment the field_offset (and because we need their values) switch $ext { "000b" { ## ec_point_format - there's another 1 byte after length ## Grab the extension data binary scan ${payload} @[expr {${field_offset} + 1}]H[expr {(${ext_len} - 1) * 2}] ext_data set ec_point_format ${ext_data} } "000a" { ## elliptic_curves - there's another 2 bytes after length ## Grab the extension data binary scan ${payload} @[expr {${field_offset} + 2}]H[expr {(${ext_len} - 2) * 2}] ext_data set elliptic_curves ${ext_data} } "000d" { ## sig_alg - there's another 2 bytes after length ## Grab the extension data binary scan ${payload} @[expr {${field_offset} + 2}]H[expr {(${ext_len} - 2) * 2}] ext_data set sig_alg ${ext_data} } default { ## Grab the otherwise unknown extension data binary scan ${payload} @${field_offset}H[expr {${ext_len} * 2}] ext_data } } ## Increment the field_offset past the extension data length. Repeat this loop until we reach rlen (the end of the payload) set field_offset [expr {${field_offset} + ${ext_len}}] } } ## Now let's compile all of that data. 
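## Note: the values compiled below are later concatenated into the fingerprint lookup key in the form
##   outer+inner+cipl+ciph+coml+comp+exte+ecur+siga+ecfp (separated by "+"), with "@@@@" used as a
##   placeholder for any extension data that was not present in the ClientHello. Entries in the
##   fingerprint_db and malicious_fingerprintdb data groups are matched against strings in this format.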
set cipl [string toupper ${cipherList_len_hex_text}] set ciph [string toupper ${cipherlist}] set coml ${compression_len} set comp [string toupper ${compression_type}] if { ( [info exists extensions_list] ) and ( ${extensions_list} ne "" ) } { set exte [string toupper ${extensions_list}] } else { set exte "@@@@" } if { ( [info exists elliptic_curves] ) and ( ${elliptic_curves} ne "" ) } { set ecur [string toupper ${elliptic_curves}] } else { set ecur "@@@@" } if { ( [info exists sig_alg] ) and ( ${sig_alg} ne "" ) } { set siga [string toupper ${sig_alg}] } else { set siga "@@@@" } if { ( [info exists ec_point_format] ) and ( ${ec_point_format} ne "" ) } { set ecfp [string toupper ${ec_point_format}] } else { set ecfp "@@@@" } ## Initialize the match variable set match "" ## Now let's build the fingerprint string and search the database set fingerprint_str "${outer}+${inner}+${cipl}+${ciph}+${coml}+${comp}+${exte}+${ecur}+${siga}+${ecfp}" ## Un-comment this line to display the fingerprint string in the LTM log for troubleshooting #log local0. "${clientip}-${serverip}: fingerprint_str = ${fingerprint_str}" if { [class match ${fingerprint_str} equals fingerprint_db] } { ## Direct match set match [class match -value ${fingerprint_str} equals fingerprint_db] } elseif { not ( ${ciph} starts_with "C0" ) and not ( ${ciph} starts_with "00" ) } { ## Hmm.. there's no direct match, which could either mean a database entry doesn't exist, or Chrome (and Opera) are adding ## special values to the cipherlist, extensions list and elliptic curves list. ## ex. 9A9A, 5A5A, EAEA, BABA, etc. at the beginning of the cipherlist ## Let's strip out these anomalous values and try the match again. ## Substract 2 bytes from cipherlist length set cipl [format %04x [expr {{[expr 0x${cipl}] - 2}}]] ## Subtract 2 bytes from the front of the cipher list set ciph [string range ${ciph} 4 end] ## Subtract 2 bytes from the front of the extensions list set exte [string range ${exte} 4 end] ## There might be an additional random set in the string that needs to be removed (pattern is "(.)A\1A") regsub {(.)A\1A} ${exte} "" exte ## If the above regsub doesn't work, try the following: #regsub {(\wA)\1} ${exte} "" exte ## Subtract 2 bytes from the front of the elliptic curves list set ecur [string range ${ecur} 4 end] ## Rebuild the fingerprint string set fingerprint_str "${outer}+${inner}+${cipl}+${ciph}+${coml}+${comp}+${exte}+${ecur}+${siga}+${ecfp}" if { [class match ${fingerprint_str} equals fingerprint_db] } { ## Guess match set match [class match -value ${fingerprint_str} equals fingerprint_db] log local0. "guessing fingerprint ${fingerprint_str}" } else { ## No match set match "${fingerprint_str}" #set match "" log local0. "no matched fingerprint ${match}" } } else { set match "${fingerprint_str}" #log local0. "no matched fingerprint ${fingerprint_str}" } ## Return the matching user agent string return ${match} } ##irule #2 - FingerprintTLS-irule apply to VS ##irule #2 - FingerprintTLS-irule apply to VS ## define variables for rate limiting when RULE_INIT { # Default rate to limit requests set static::maxRate 15 # Default rate to set static::warnRate 12 # During this many seconds set static::timeout 1 } when CLIENT_ACCEPTED { ## Collect the TCP payload TCP::collect } when CLIENT_DATA { ## Get the TLS packet type and versions if { ! 
[info exists rlen] } {
    binary scan [TCP::payload] cH4ScH6H4 rtype outer_sslver rlen hs_type rilen inner_sslver
    if { ( ${rtype} == 22 ) and ( ${hs_type} == 1 ) } {
      ## This is a TLS ClientHello message (22 = TLS handshake, 1 = ClientHello)
      ## Call the fingerprintTLS proc from the FingerprintTLS-proc iRule to set the TLS fingerprint value with record length, outer and inner SSL version info, source and destination IP address details as input
      set fingerprint [call FingerprintTLS-proc::fingerprintTLS [TCP::payload] ${rlen} ${outer_sslver} ${inner_sslver} [IP::client_addr] [IP::local_addr]]

      ### Do Something here ###
      # set matched ${fingerprint}
      #log local0. "fingerprint is ${fingerprint}"
      # if { [class match $matched equals malicious_fingerprint] } {
      #   log local0. "fingerprint is $matched dropped"
      #   drop
      # }
      ###########################

      #rate limit logic
      #check if fingerprint matches an expected fingerprint
      if { [class match ${fingerprint} equals fingerprint_db] } {
        event disable all
      } elseif {![class match ${fingerprint} equals fingerprint_db] && [class match ${fingerprint} equals malicious_fingerprintdb]} {
        #check if fingerprint matches a known malicious fingerprint
        set malicious_fingerprint [class match -value ${fingerprint} equals malicious_fingerprintdb]
        drop
        log local0. "known malicious fingerprint matched $malicious_fingerprint"
      } else {
        set suspicious_fingerprint ${fingerprint}
        #rate limit fingerprint
        # Increment and Get the current request count bucket
        #set epoch [clock seconds]
        #monitor an unrecognized fingerprint and rate limit it
        set currentCount [table incr -mustexist "Count_[IP::client_addr]_${suspicious_fingerprint}"]
        if { $currentCount eq "" } {
          # Initialize a new request count bucket
          table set "Count_[IP::client_addr]_${suspicious_fingerprint}" 1 indef $static::timeout
          set currentCount 1
        }
        # Actually check fingerprint for being over limit
        if { $currentCount >= $static::maxRate } {
          log local0. "ERROR: fingerprint:[IP::client_addr]_${suspicious_fingerprint} exceeded ${static::maxRate} requests per second. Rejecting request. Current requests: ${currentCount}."
          event disable all
          drop
        }
        if { $currentCount > $static::warnRate } {
          #log local0. "WARNING: fingerprint:[IP::client_addr]_${suspicious_fingerprint} exceeded ${static::warnRate} requests per second. Will reject at ${static::maxRate}. Current requests: ${currentCount}."
        }
        log local0. "fingerprint:[IP::client_addr]_${suspicious_fingerprint}: currentCount: ${currentCount}"
      }
    }
  }

  ### Do Something here ###
  # Collect the rest of the record if necessary
  if { [TCP::payload length] < $rlen } { TCP::collect $rlen }

  ## Release the payload
  TCP::release
}
#client_data close bracket

when HTTP_REQUEST {
  if { $currentCount > $static::warnRate } {
    log local0. "WARNING: suspicious_fingerprint: [IP::client_addr]_${suspicious_fingerprint}: User-Agent:[HTTP::header User-Agent] exceeded ${static::warnRate} requests per second. Will reject at ${static::maxRate}. Current requests: ${currentCount}."
  }
}
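Once the two iRules and the data groups are in place, a rough way to sanity-check the rate limiting from a test client is sketched below. The hostname and URL are placeholders, and the thresholds being exercised are the static::maxRate and static::warnRate values defined in the rate-limiting iRule; each curl invocation performs a fresh TLS handshake, so each one produces a ClientHello for fingerprinting. This is a quick functional check, not a load test.

# Sequential requests from a single (unrecognized) client fingerprint
for i in $(seq 1 50); do
  curl -k -s -o /dev/null https://app.example.com/login.html
done

# If sequential requests are too slow to cross the per-second threshold, run them in parallel
seq 1 50 | xargs -P 10 -I{} curl -k -s -o /dev/null https://app.example.com/login.html

# On the BIG-IP, watch for the iRule's warning/error log messages while the test runs
tail -f /var/log/ltm | grep -i fingerprint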