Apache
2.5 bad ways to implement a server load balancing architecture
I'm in a bit of a mood after reading a JavaWorld article on server load balancing that presents some fairly poor ideas on architectural implementations. It's not that the concepts are necessarily wrong; they will work. It's the architectures offered as a method of load balancing that made me do a double-take and say "What?"

I started reading this article because it was part 2 of a series on load balancing, and this installment focused on application layer load balancing. You know, layer 7 load balancing. Something we at F5 just might know a thing or two about. But you never know where and from whom you'll learn something new, so I was eager to dive in and learn something. I learned something all right. I learned a couple of bad ways to implement a server load balancing architecture.

TWO LOAD BALANCERS?

The first indication I wasn't going to be pleased with these suggestions came with the description of a "popular" load-balancing architecture that included two load balancers: one for the transport layer (layer 4) and another for the application layer (layer 7).

"In contrast to low-level load balancing solutions, application-level server load balancing operates with application knowledge. One popular load-balancing architecture, shown in Figure 1, includes both an application-level load balancer and a transport-level load balancer."

Even the most rudimentary, entry-level load balancers on the market today - software and hardware, free and commercial - can handle both transport and application layer load balancing. There is absolutely no need to deploy two separate load balancers to handle two different layers in the stack. This is a poor architecture that introduces unnecessary management and architectural complexity as well as additional points of failure into the network architecture. It's bad for performance because it introduces additional hops and points of inspection through which application messages must flow.

To give the author credit, he does recognize this and offers up a second option to counter the negative impact of the "additional network hops."

"One way to avoid additional network hops is to make use of the HTTP redirect directive. With the help of the redirect directive, the server reroutes a client to another location. Instead of returning the requested object, the server returns a redirect response such as 303."

I found it interesting that the author cited an HTTP response code of 303, which is rarely returned in conjunction with redirects. More often a 302 is used. But it is valid, if a bit odd. That's not the real problem with this one, anyway. The author claims "The HTTP redirect approach has two weaknesses." That's true, it has two weaknesses - and a few more as well. He correctly identifies that this approach does nothing for availability and exposes the infrastructure, which is a security risk. But he fails to mention that using HTTP redirects introduces additional latency because it requires additional requests that must be made by the client (increasing network traffic), and that it is further incapable of providing any other advanced functionality at the load balancing point because it essentially turns the architecture into a variation of a DSR (direct server return) configuration.

THAT'S ONLY 2 BAD WAYS, WHERE'S THE .5?

The half bad way comes from the fact that the solutions are presented as a Java-based solution. They will work in the sense that they do what the author says they'll do, but they won't scale.
Consider this: the reason you're implementing load balancing is to scale, because one server can't handle the load. A solution that involves putting a single server - with the same limitations on connections and session tables - in front of two servers with essentially twice the capacity of the load balancer gains you nothing. The single server may be able to handle 1.5 times (if you're lucky) what the servers serving applications may be capable of, because the burden of processing application requests has been offloaded to the application servers, but you're still limited in the number of concurrent users and connections you can handle because it's limited by the platform on which you are deploying the solution. An application server acting as a cluster controller or load balancer simply doesn't scale as well as a purpose-built load balancing solution because it isn't optimized to be a load balancer and its resource management is limited to that of a typical application server. That's true whether you're using a software solution like Apache mod_proxy_balancer or a hardware solution.

So if you're implementing this type of a solution to scale an application, you aren't going to see the benefits you think you are, and in fact you may see a degradation of performance due to the introduction of additional hops, additional processing, and poorly designed network architectures.

I'm all for load balancing, obviously, but I'm also all for doing it the right way. And these solutions are just not the right way to implement a load balancing solution unless you're trying to learn the concepts involved or are in a computer science class in college. If you're going to do something, do it right. And doing it right means taking into consideration the goals of the solution you're trying to implement. The goals of a load balancing solution are to provide availability and scale, neither of which the solutions presented in this article will truly achieve.
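For contrast with the two-device design criticized above, here is a minimal sketch of a layer 7 routing decision made on the same device that already terminates the layer 4 connection. This is illustrative only: the pool names are placeholders, and the constructs shown are standard iRule commands rather than anything taken from the JavaWorld article.

    when HTTP_REQUEST {
        # Route on application-layer data (the URI) in the same device that
        # handles the transport layer; pool names are hypothetical.
        switch -glob [string tolower [HTTP::uri]] {
            "/images/*" { pool static_pool }
            "/api/*"    { pool app_pool }
            default     { pool web_pool }
        }
    }

A single configuration like this provides both transport and application layer distribution without introducing a second device or an extra hop.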
Mitigating The Apache Struts ClassLoader Manipulation Vulnerabilities Using ASM

Background

Recently the F5 security research team has witnessed a series of CVEs created for the popular Apache Struts platform. From Wikipedia:

"Apache Struts was an open-source web application framework for developing Java EE web applications. It uses and extends the Java Servlet API to encourage developers to adopt a model–view–controller (MVC) architecture. It was originally created by Craig McClanahan and donated to the Apache Foundation in May, 2000. Formerly located under the Apache Jakarta Project and known as Jakarta Struts, it became a top-level Apache project in 2005."

The initial CVE-2014-0094 disclosed a critical vulnerability that allows an attacker to manipulate the ClassLoader by using the 'class' parameter, which is directly mapped to the getClass() method through the ParametersInterceptor module in the Struts framework. The Apache Struts security bulletin recommended upgrading to Struts 2.3.16.1 to mitigate the vulnerability. Alternatively, users were also able to mitigate this vulnerability with a configuration change on their current Struts installations. The mitigation included adding the following regular expression to the list of disallowed parameters in ParametersInterceptor:

    '^class\.*'

After several weeks, the solution was found to be incomplete, and sparked four new CVEs: CVE-2014-0112, CVE-2014-0113, CVE-2014-0114 and CVE-2014-0116.

Note: During the initial release of this article, CVE-2014-0114 and CVE-2014-0116 were not yet publicly disclosed and weren't mentioned in this article. The article has now been edited to include mitigation for these CVEs as well.

CVE-2014-0112 mentions the ClassLoader vulnerability still existing in parameters, and the security advisory for it suggests a new regular expression to include in the ParametersInterceptor config: (.*\.
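The regular expressions above belong in the Struts ParametersInterceptor configuration itself. As a complementary, network-side stopgap in the spirit of the ASM-based mitigation this article describes, the following hedged iRule sketch rejects requests whose query string appears to target the 'class' parameter tree. It is deliberately coarse: it is not the ASM signature, it does not inspect POST bodies, and it may need tuning to avoid false positives on legitimate parameter names that merely contain the string "class.".

    when HTTP_REQUEST {
        # Coarse virtual patch for ClassLoader manipulation attempts via the
        # query string; adjust or remove once the framework itself is patched.
        if { [string tolower [HTTP::query]] contains "class." } {
            HTTP::respond 403 content "Forbidden"
        }
    }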
US FEDERAL: Enabling Kerberos for Smartcard Authentication to Apache

The following provides guidance on the configuration of BIG-IP Local Traffic Manager and Access Policy Manager in support of Apache Web Server smartcard / Kerberos access using Active Directory as the Key Distribution Center. This content is part of a series developed to address the configuration of non-IIS web servers to support Kerberos Single Sign-On and therefore smartcard access, but should be relevant anywhere SSO utilizing Kerberos is needed. Several assumptions are made concerning the implementation of Active Directory, PKI, and the Linux distro(s) used.

Base Software Requirements

The following base requirements are assumed for this configuration:

- Microsoft Windows Server 2008 R2 (Active Directory)
- BIG-IP LTM 11.4 or higher (the configuration items will probably work with most versions of 11, but only 11.4 and 11.5 were tested in this scenario)
- Ubuntu Server 13.10 (a fairly simple and user-friendly distro based on Debian; this was also tested on RHEL/CentOS)

This config will work in other distros of Linux, but posting all the different configurations would just be redundant. If you need help, reach out to the US Federal Team.

How it Works

The configuration of this scenario is fairly simple. The majority of the configuration and testing will most likely reside on the Linux side. The client accesses and authenticates to APM via a smartcard. Depending on the method of choice, an attribute identifying the user is extracted from the certificate and validated against an AD/LDAP. In Federal, this step has two purposes: to extract the UPN to query AD for the user (EDIPI@MIL), and to retrieve the sAMAccountName to use for the Kerberos principal. Once the user has been validated and the sAMAccountName retrieved, the session variables are assigned and the user is granted access.

Base Linux Configuration

Configure Static IP & DNS

You can use the text editor of your own preference, but I like nano so that is what I will document.

    sudo nano /etc/network/interfaces

You will want to change iface eth0 inet dhcp to static, and change the network settings to match your environment. Since this scenario uses Windows AD as the KDC, you will want to make sure your DNS points to a domain controller.

    auto eth0
    iface eth0 inet static
      address 192.168.1.2
      netmask 255.255.255.0
      network 192.168.1.0
      broadcast 192.168.1.255
      gateway 192.168.1.1
      dns-nameservers 192.168.1.1

Note: Depending on your distro, you will use dns-nameservers or resolv.conf. I also removed the DHCP client entirely. (Not necessary, but I like to clean out things I won't ever use.)

Restart networking:

    sudo /etc/init.d/networking restart

or

    sudo service networking restart

Install LAMP (Linux, Apache, MySQL, PHP)

In Ubuntu this is fairly simple; you can just do the following.

    sudo tasksel

Then check the box for LAMP, follow the on-screen instructions, set the MySQL password, and then you are done. If you access the IP of your server from a browser, you will see the default Apache "It Works!" page.

Install & Configure Kerberos

    sudo apt-get install krb5-user

Some distros will ask for default REALM, KDC, and Admin server configs. In my case it is F5LAB.LOCAL, 192.168.1.5, 192.168.1.5.

krb5.conf

Depending on your distro, there will be a ton of extra settings in the krb5.conf file, some related to Heimdal and some for MIT Kerberos. The core settings that I needed for success are listed below.

[libdefaults]

Set your default realm, DNS lookups to true, and validate the encryption types.
HMAC is good; Windows does not have DES enabled by default and you should not consider enabling it.

    default_realm = F5LAB.LOCAL
    dns_lookup_realm = true
    dns_lookup_kdc = true
    ticket_lifetime = 24h
    forwardable = true
    default_tgs_enctypes = arcfour-hmac-md5 des-cbc-crc des3-hmac-sha1
    default_tkt_enctypes = arcfour-hmac-md5 des-cbc-crc des3-hmac-sha1

[realms]

kdc: the Domain Controller. admin_server: not required, but can also point to the Domain Controller. default_domain: the Kerberos realm.

    F5LAB.LOCAL = {
      kdc = 192.168.1.5:88
      admin_server = 192.168.1.5
      default_domain = F5LAB.LOCAL
    }

Install Mod_Auth_Kerb

This is required to make Apache support Kerberos. Some distros include this when you load Apache, but here is how you make sure.

    sudo apt-get install libapache2-mod-auth-kerb

Testing

Let's make sure that we configured networking and Kerberos properly. Use kinit to test a known user account. This should reach out to the KDC to get a ticket for the user. REALMs are case sensitive, so make sure it's all upper case. The following will request a password for the user, and if everything is set up properly, there will be no response.

    kinit mcoleman@F5LAB.LOCAL

You can run klist to see your ticket.

    klist

An example of what happens when the REALM is entered incorrectly: "KDC reply did not match expectations while getting initial credentials."

Windows Configurations

Configuring SPNs

Since Linux is not the KDC or admin server, this is done on the Active Directory side. Create a user account for each application, with the appropriate Service Principal Names. Be aware: when we run ktpass, all SPNs will be overwritten, with the exception of the SPN used in the command.

Crypto

Pay attention to the encryption types that are / were enabled in the krb5.conf file. It is important to remember that both DES cipher suites (DES-CBC-MD5 & DES-CBC-CRC) are disabled by default in Windows 7. The following cipher suites are enabled by default in Windows 7 and Windows Server 2008 R2:

- AES256-CTS-HMAC-SHA1-96
- AES128-CTS-HMAC-SHA1-96
- RC4-HMAC

For the purposes of this guide and the available settings in Windows, use RC4-HMAC. DO NOT enable DES on Windows.

Create a Keytab

Keytabs can be created in Windows by using ktpass. A keytab is a file that contains a Kerberos principal and encrypted keys. The purpose is to allow authentication via Kerberos without using a password.

    ktpass -princ HTTP/lamp.f5lab.local@F5LAB.LOCAL -mapuser F5LAB\apache.svc -crypto RC4-HMAC-NT -pass pass@word1 -ptype KRB5_NT_PRINCIPAL -kvno 0 -out LAMP.keytab

Copy the keytab to your Linux server(s). For my use case I put the keytab at /etc/apache2/auth/apache2.keytab.

Lock it down - Linux

The security of a keytab is pretty important. Malicious users with access to keytabs can impersonate network services. To avoid this, we can secure the keytab's permissions.

    sudo chown www-data:www-data /etc/apache2/auth/apache2.keytab
    sudo chmod 400 /etc/apache2/auth/apache2.keytab

Testing

Now we want to make sure everything is looking alright so far, so let's make sure the keytab looks right and we can authenticate properly against the KDC.

List the contents of the keytab:

    klist -ke /etc/apache2/auth/apache2.keytab

Test Authentication with the S4U SPN

The following commands can be used to initialize the credential cache for the S4U proxy account and then to test authentication with a user account.
    kinit -f http/lamp.f5lab.local@F5LAB.LOCAL
    kvno http/lamp.f5lab.local@F5LAB.LOCAL
    sudo klist -e -k -t /etc/apache2/auth/apache2.keytab
    kvno -C -U mcoleman http/lamp.f5lab.local

Apache Configurations

I was able to get authentication working by adding the following to the default site. In Ubuntu it's /etc/apache2/sites-enabled/000-default.conf.

    <VirtualHost *:80>
    …
      <Location />
        Options Indexes
        AllowOverride None
        Order allow,deny
        allow from all
        AuthType Kerberos
        #KrbServiceName HTTP/lamp.f5lab.local@F5LAB.LOCAL
        AuthName "Kerberos Logon"
        KrbMethodNegotiate on
        KrbMethodK5Passwd on
        KrbVerifyKDC off
        KrbAuthRealm F5LAB.LOCAL
        Krb5KeyTab /etc/apache2/auth/apache2.keytab
        require valid-user
      </Location>
    </VirtualHost>

BIG-IP Configurations

This portion is actually pretty straightforward. Configure a standard virtual server with a pool pointing at the Apache servers.

Configuration Items

- Kerberos SSO Profile – This is used to authenticate to Apache.
- Access Profile – The access profile binds all of the APM resources.
- iRule – An iRule is used to extract the smartcard certificate User Principal Name (UPN).
- ClientSSL Profile – This is used to establish a secure connection between the user and the APM VIP. Apply the server certificate, key, and a trusted certificate authority's bundle file. All other settings can be left at default.
- HTTP profile – This is required for APM to function. A generic HTTP profile will do.
- SNAT profile – Depending on other network factors, a SNAT profile may or may not be necessary in a routed environment. If the backend servers can route directly back to the clients, bypassing the BIG-IP, then a SNAT is required.
- Virtual server – The virtual server must use an IP address accessible to client traffic. Assign a listener (destination) IP address and port, the HTTP profile, the client SSL profile, a SNAT profile (as required), the access profile, and the iRule.

Modify the krb5.conf

    [libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_realm = true
    dns_lookup_kdc = true
    ticket_lifetime = 24h
    forwardable = yes

APM Kerberos SSO Profile

Create an APM Kerberos SSO profile like the one shown below. Change the Username Source to "session.logon.last.username", enter the Active Directory domain name (in all upper case), enter the full service principal name of the AD user service account previously created, and enter the account's password. The only real change from IIS is the Send Authorization setting, which should be set to "On 401 Status Code."

    Username Source: session.logon.last.username
    User REALM Source: session.logon.last.domain
    Kerberos REALM: F5LAB.COM
    KDC (optional):
    Account Name: HTTP/lamp.f5lab.com
    Account Password: password
    Confirm Account Password: password
    SPN Pattern (optional):
    Send Authorization: On 401 Status

Note: The full service principal name includes the service type (ex. host/), the service name (ex. krbsrv.alpha.com), and the domain realm name (ex. @ALPHA.COM, in upper case). A KDC can be specified, but is not needed unless you do not have DNS lookup enabled in the krb5.conf on the F5. Basically, if you don't tell the F5 how to resolve the KDC, then you need to specify one. SPN Pattern can help if you have issues with DNS/rDNS. You can specify which SPN you want to send with either a designated or dynamic option.

VPE configuration

The components of the VPE are as follows:

- On-Demand Cert Auth – Set this to Require.
- Rule event – Set the ID to "CERTPROC" to trigger the EDIPI extraction iRule code.
- LDAP Query – Validates the UPN and retrieves sAMAccountName.

Basic CAC iRule

    when ACCESS_ACL_ALLOWED {
        # Set Username to value of sAMAccountName extracted from LDAP Query.
        ACCESS::session data set session.logon.last.username [ACCESS::session data get "session.ldap.last.attr.sAMAccountName"]
    }
    when ACCESS_POLICY_AGENT_EVENT {
        # "CERTPROC" is the name of the iRule event called from the APM policy
        switch [ACCESS::policy agent_id] {
            "CERTPROC" {
                if { [ACCESS::session data get session.ssl.cert.x509extension] contains "othername:UPN<" } {
                    # Set temporary session variable to value extracted from X.509 data.
                    set tmpupn [findstr [ACCESS::session data get session.ssl.cert.x509extension] "othername:UPN<" 14 ">"]
                    ACCESS::session data set session.custom.certupn $tmpupn
                    #log local0. "Extracted OtherName Field: $tmpupn"
                }
            }
        }
    }

Put it together

Now that all the functional parts are in place, you can test access to Apache. If you want to add some code to see what user is hitting your application, you can create a small PHP page containing the following code.

    <?php echo $_SERVER['REMOTE_USER']; ?>
    <?php echo $_SERVER['KRB5CCNAME']; ?>

The server variables will echo the current authenticated user name.

Troubleshooting

Kerberos is fairly fault-tolerant, if the requisite services are in place. That being said, it can be a PITA to troubleshoot. If Kerberos authentication fails, check the following:

- The user has a valid ticket. Use klist, kinit, and kvno as explained previously.
- Validate basic network connectivity.
- DNS (forward & reverse); ensure there are no duplicate A or PTR records. This can be overridden in the Kerberos SSO profile SPN Pattern settings.
- Verify the clocks of the KDC and local server are synced.
- Turn APM SSO logging up to debug and tail the APM logs (tail -f /var/log/apm).

Questions? Contact the US Federal team, Federal [at] f5.com.
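A postscript on troubleshooting, offered as a hedged sketch rather than part of the original walkthrough: you can temporarily log the session variables the policy produced, using the same variable names referenced in the iRule and LDAP query above, to confirm that the UPN and sAMAccountName were actually extracted.

    when ACCESS_POLICY_COMPLETED {
        # Temporary debug output; remove once authentication is working.
        log local0. "Cert UPN: [ACCESS::session data get session.custom.certupn]"
        log local0. "sAMAccountName: [ACCESS::session data get session.ldap.last.attr.sAMAccountName]"
        log local0. "Kerberos principal source: [ACCESS::session data get session.logon.last.username]"
    }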
4 reasons not to use mod-security

Apache is a great web server if for no other reason than it offers more flexibility through modules than just about any other web server. You can plug in all sorts of modules to enhance the functionality of Apache. But as I often say, just because you can doesn't mean you should. One of the modules you can install is mod_security. If you aren't familiar with mod_security, essentially it's a "roll your own" web application firewall plug-in for the Apache web server. Some of the security functions you can implement via mod_security are:

- Simple filtering
- Regular Expression based filtering
- URL Encoding Validation
- Unicode Encoding Validation
- Auditing
- Null byte attack prevention
- Upload memory limits
- Server identity masking
- Built-in Chroot support

Using mod_security you can also implement protocol security, which is an excellent idea for ensuring that holes in protocols aren't exploited. If you aren't sold on protocol security you should read up on the recent DNS vulnerability discovered by Dan Kaminsky - it's all about the protocol and has nothing to do with vulnerabilities introduced by implementation. mod_security provides many options for validating URLs, URIs, and application data. You are, essentially, implementing a custom web application firewall using configuration directives. If you're on this path then you probably agree that a web application firewall is a good thing, so why would I caution against using mod_security? Well, there are four reasons, actually.

1. It runs on every web server. This is an additional load on the servers that can be easily offloaded for a more efficient architecture. The need for partial duplication of configuration files across multiple machines can also result in the introduction of errors or extraneous configuration that is unnecessary. Running mod_security on every web server decreases capacity to serve users and applications accordingly, which may require additional servers to scale to meet demand.

2. You have to become a security expert. You have to understand the attacks you are trying to stop in order to write a rule to prevent them. So either you become an expert or you trust a third party to be the expert. The former takes time and the latter takes guts, as you're introducing unnecessary risk by trusting a third party.

3. You have to become a protocol expert. In addition to understanding all the attacks you're trying to prevent, you must become an expert in the HTTP protocol. Part of providing web application security is to sanitize and enforce the HTTP protocol to ensure it isn't abused to create a hole where none previously appeared. You also have to become an expert in Apache configuration directives, and the specific directives used to configure mod_security.

4. The configuration must be done manually. Unless you're going to purchase a commercially supported version of mod_security, you're writing complex rules manually. You'll need to brush up on your regular expression skills if you're going to attempt this. Maintaining those rules is just as painful, as any update necessarily requires manual intervention.

Of course you could introduce an additional instance of Apache with mod_security installed that essentially proxies all requests through mod_security, thus providing a centralized security architecture, but at that point you've just introduced a huge bottleneck into your infrastructure.
If you're already load-balancing multiple instances of a web site or application, then it's not likely that a single instance of Apache with mod_security is going to be able to handle the volume of requests without increasing downtime or degrading performance such that applications might as well be down because they're too painful to use.

Centralizing security can improve performance, reduce the potential avenues of risk through configuration error, and keep your security up-to-date by providing easy access to updated signatures, patterns, and defenses against existing and emerging web application attacks. Some web application firewalls offer pre-configured templates for specific applications like Microsoft OWA, providing a simple configuration experience that belies the depth of security knowledge applied to protect the application. Web application firewalls can enable compliance with requirement 6.6 of PCI DSS. And they're built to scale, which means the scenario in which mod_security is used as a reverse proxy to protect all web servers from harm but quickly becomes a bottleneck and impediment to performance doesn't happen with purpose-built web application firewalls.

If you're considering using mod_security then you already recognize the value of and need for a web application firewall. That's great. But consider carefully where you will deploy that web application firewall, because the decision will have an impact on the performance and availability of your site and applications.
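To make the "centralize it at the proxy" argument concrete, here is a minimal, hedged sketch of two items from the mod_security feature list above - simple filtering and server identity masking - implemented once on an application delivery controller in front of every pool member instead of on every web server. The filter pattern and the masked Server value are arbitrary examples, not a recommended rule set.

    when HTTP_REQUEST {
        # Simple filtering applied once, in front of every web server.
        if { [HTTP::uri] contains "../" } {
            HTTP::respond 403 content "Forbidden"
        }
    }
    when HTTP_RESPONSE {
        # Server identity masking, also from the feature list above.
        HTTP::header replace Server "webserver"
    }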
I am wondering why not all websites enabling this great feature GZIP?

Understanding the impact of compression on server resources and application performance

While doing some research on a related topic, I ran across this question and thought "that deserves an answer" because it certainly seems like a no-brainer. If you want to decrease bandwidth - which subsequently decreases response time and improves application performance - turn on compression. After all, a large portion of web site traffic is text-based: CSS, JavaScript, HTML, RSS feeds, which means it will greatly benefit from compression. Typical GZIP compression affords at least a 3:1 reduction in size, with hardware-assisted compression yielding an average of 4:1 compression ratios. That can dramatically affect the response time of applications. As I said, seems like a no-brainer.

Here's the rub: turning on compression often has a negative impact on capacity because it is CPU-bound, and under certain conditions it can actually cause a degradation in performance due to the latency inherent in compressing data compared to the speed of the network over which the data will be delivered. Here comes the science.

IMPACT ON CPU UTILIZATION

Compression via GZIP is CPU bound. It requires a lot more CPU than you might think. The larger the file being compressed, the more CPU resources are required. Consider for a moment what compression is really doing: it's finding all similar patterns and replacing them with representations (symbols, indexes into a table, etc…) pointing to a single instance of the text instead. So it makes sense that the larger a file is, the more resources are required - RAM and CPU - to execute such a process. Of course the larger the file is, the more benefit you see from compression in terms of bandwidth and improvement in response time. It's kind of a Catch-22: you want the benefits but you end up paying in terms of capacity. If CPU and RAM are being chewed up by the compression process then the server can handle fewer requests and fewer concurrent users.

You don't have to take my word for it - there are quite a few examples of testing done on web servers and compression that illustrate the impact on CPU utilization:

- Measuring the Performance Effects of Dynamic Compression in IIS 7.0
- Measuring the Performance Effects of mod_deflate in Apache 2.2
- HTTP Compression for Web Applications

They all essentially say the same thing: if you're serving dynamic content (or static content and don't have local caching on the web server enabled) then there is a significant negative impact on CPU utilization that occurs when enabling GZIP/compression for web applications. Given the exceedingly dynamic nature of Web 2.0 applications, the use of AJAX and similar technologies, and the data-driven world in which we live today, that means there are very few types of applications running on web servers for which compression will not negatively impact the capacity of the web server.

In case you don't (want || have time) to slog through the above articles, here's a quick recap:

    File Size   Bandwidth decrease   CPU utilization increase
    IIS 7.0
    10KB        55%                  4x
    50KB        67%                  20x
    100KB       64%                  30x
    Apache 2.2
    10KB        55%                  4x
    50KB        65%                  10x
    100KB       63%                  30x

It's interesting to note that IIS 7.0 and Apache 2.2 mod_deflate have essentially the same performance characteristics. This data falls in line with the aforementioned Intel report on HTTP compression, which noted that CPU utilization increased 25-35% when compression was enabled.
So essentially when you enable compression you are trading its benefits - bandwidth reduction, response time improvement - for a reduction in capacity. You're robbing Peter to pay Paul, because instead of paying for bandwidth you're paying for more servers to handle the same load.

THE MYTH OF IMPROVED RESPONSE TIME

One of the reasons you'd want to compress content is to improve response time by decreasing the total number of packets that have to traverse a wire. This is a necessity when transferring content via a WAN, but it can actually cause a decrease in performance for application delivery over the LAN. This is because the time it takes to compress the content and then deliver it is actually greater than the time to just transfer the original file via the LAN. The speed of the network over which the content is being delivered is highly relevant to whether compression yields benefits for response time. The increasing consumption of CPU resources as volume increases, too, has a negative impact on the ability of the server to process and subsequently respond, which also means an increase in application response time, which is not the desired result.

Maybe you're thinking "I'll just get more CPU then. After all, there's like billion-core servers out there, that ought to solve the problem!" Compression algorithms, like FTP, are greedy. FTP will, if allowed, consume as much bandwidth as possible in an effort to transfer data as quickly as possible. Compression will do the same thing to CPU resources: consume as much as it can to perform its task as quickly as possible. Eventually, yes, you'll find a machine with enough cores to support both compression and capacity needs, but at what cost? It may well have been more financially efficient to invest in a better solution (that also brings additional benefits to the table) than just increasing the size of the server. But hey, it's your data, you need to do what you need to do.

The size of the content, too, has an impact on whether compression will benefit application performance. Consider that the goal of compression is to decrease the number of packets being transferred to the client. Generally speaking, the standard MTU for most networks is 1500 bytes because that's what works best with Ethernet and IP. That means you can assume around 1400 bytes per packet available to transfer data. So if content is 1400 bytes or less, you get absolutely no benefit out of compression because it's already going to take only one packet to transfer; you can't really send half-packets, after all, and in some networks packets that are too small can actually freak out some network devices because they're optimized to handle the large content being served today - which means many full packets.

TO COMPRESS OR NOT COMPRESS

There is real benefit to compression; it's part of the core techniques used by both application acceleration and WAN application delivery services to improve performance and reduce costs. It can drastically reduce the size of data, and especially when you might be paying by the MB or GB transferred (such as applications deployed in cloud environments) this is a very important feature to consider. But if you end up paying for additional servers (or instances in a cloud) to make up for the lost capacity due to higher CPU utilization because of that compression, you've pretty much ended up right where you started: no financial benefit at all. The question is not if you should compress content, it's when and where and what you should compress.
The answer to "should I compress this content" almost always needs to be based on a set of criteria that require context-awareness - the ability to factor into the decision-making process the content, the network, the application, and the user. If the user is on a mobile device and the size of the content is greater than 2000 bytes and the type of content is text-based and … It is this type of intelligence that is required to effectively apply compression such that the greatest benefits of reduction in costs, application performance, and maximization of server resources are achieved. Any implementation that can't factor all these variables into the decision to compress or not is not an optimal solution, as it's just guessing or blindly applying the same policy to all kinds of content. Such implementations effectively defeat the purpose of employing compression in the first place.

That's why the answer to "where" is almost always "on the load balancer or application delivery controller". Not only are such devices capable of factoring in all the necessary variables, but they also generally employ specialized hardware designed to speed up the compression process. By offloading compression to an application delivery device, you can reap the benefits without sacrificing performance or CPU resources.

- Measuring the Performance Effects of Dynamic Compression in IIS 7.0
- Measuring the Performance Effects of mod_deflate in Apache 2.2
- HTTP Compression for Web Applications
- The Context-Aware Cloud
- The Revolution Continues: Let Them Eat Cloud
- Nerd Rage
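Tying the "when and where and what" question back to the context-awareness discussion above, here is a minimal sketch of a selective compression policy on an application delivery controller. It assumes an HTTP compression profile is attached to the virtual server; the 2000-byte threshold and the text-only check simply mirror the illustration above and are not tuning advice.

    when HTTP_RESPONSE {
        # Compress only text responses large enough to span multiple packets;
        # chunked responses without a Content-Length simply fall through.
        if { [HTTP::header exists "Content-Length"] &&
             ([HTTP::header "Content-Type"] contains "text") &&
             ([HTTP::header "Content-Length"] > 2000) } {
            COMPRESS::enable
        } else {
            COMPRESS::disable
        }
    }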
Using "X-Forwarded-For" in Apache or PHP

An issue that often comes up for users of any full proxy-based product is that the original client IP address is often lost to the application or web server. This is because in a full proxy system there are two connections: one between the client and the proxy, and a second one between the proxy and the web server. Essentially, the web server sees the connection as coming from the proxy, not the client. Needless to say, this can cause problems if you want to know the IP address of the real client for logging, for troubleshooting, for tracking down bad guys, or for performing IP address specific tasks such as geocoding. Maybe you're just like me and you're nosy, or you're like Don and you want the webalizer graphs to be a bit more interesting (just one host does not a cool traffic graph make, after all!).

That's where the "X-Forwarded-For" HTTP header comes into play. Essentially the proxy can, if configured to do so, insert the original client IP address into a custom HTTP header so it can be retrieved by the server for processing. If you've got a BIG-IP you can simply enable the ability to insert the "X-Forwarded-For" header in the http profile. Check out the screen shot below to see just how easy it is.

Yeah, it's that easy. If for some reason you can't enable this feature in the HTTP profile, you can write an iRule to do the same thing.

    when HTTP_REQUEST {
        HTTP::header insert "X-Forwarded-For" [IP::client_addr]
    }

Yeah, that's pretty easy, too. So now that you're passing the value along, what do you do with it?

Modifying Apache's Log Format

Well, Joe has a post describing how to obtain this value in IIS. But that doesn't really help if you're not running IIS and, like me, have chosen to run a little web server you may have heard of called Apache. Configuring Apache to use the X-Forwarded-For instead of (or in conjunction with) the normal HTTP client header is pretty simple. ApacheWeek has a great article on how to incorporate custom fields into a log file, but here's the down and dirty. Open your configuration file (usually in /etc/httpd/conf/) and find the section describing the log formats. Then add the following to the log format you want to modify, or create a new one that includes this to extract the X-Forwarded-For value:

    %{X-Forwarded-For}i

That's it. If you don't care about the proxy IP address, you can simply replace the traditional %h in the common log format with the new value, or you can add it as an additional field. Restart Apache and you're ready to go.

Getting the X-Forwarded-For from PHP

If you're like me, you might have written an application or site in PHP and for some reason you want the real client IP address, not the proxy IP address. Even though my BIG-IP has the X-Forwarded-For functionality enabled in the http profile, I still need to access that value from my code so I can store it in the database.

    $headers = apache_request_headers();
    $real_client_ip = $headers["X-Forwarded-For"];

That's it, now I have the real IP address of the client, and not just the proxy's address.

Happy Coding & Configuring!

Imbibing: Coffee
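A quick postscript, not part of the original post: if there is already a proxy in front of the BIG-IP that sets its own X-Forwarded-For header, a common refinement - shown here only as a hedged sketch - is to append the client address to the existing value instead of inserting a second header.

    when HTTP_REQUEST {
        if { [HTTP::header exists "X-Forwarded-For"] } {
            # Append to the value set by an upstream proxy
            HTTP::header replace "X-Forwarded-For" "[HTTP::header X-Forwarded-For], [IP::client_addr]"
        } else {
            HTTP::header insert "X-Forwarded-For" [IP::client_addr]
        }
    }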
F5 Friday: Zero-Day Apache Exploit? Zero-Problem

#infosec A recently discovered 0-day Apache exploit is no problem for BIG-IP. Here are a couple of different options using F5 solutions to secure your site against it.

It's called "Apache Killer" and it's yet another example of exploiting not a vulnerability, but a protocol's behavior.

UPDATE (8/26/2011): We're hearing that other Range-* HTTP headers are also vulnerable. Take care to secure against these potential attack vectors as well!

In this case, the target is Apache and the "vulnerability" is in the way multiple ranges are handled by the Apache HTTPD server. The Range HTTP header is used to request one or more sub-ranges of the response, instead of the entire response entity. Ranges are sometimes used by thin clients (an example given was an eReader) that are memory constrained and may want to display just portions of the web page. Generally speaking, multiple byte ranges are not used very often. RFC 2616 Section 14.35.2 (Range retrieval requests) explains:

    HTTP retrieval requests using conditional or unconditional GET methods MAY request one or more
    sub-ranges of the entity, instead of the entire entity, using the Range request header, which
    applies to the entity returned as the result of the request:

        Range = "Range" ":" ranges-specifier

    A server MAY ignore the Range header. However, HTTP/1.1 origin servers and intermediate caches
    ought to support byte ranges when possible, since Range supports efficient recovery from
    partially failed transfers, and supports efficient partial retrieval of large entities.

The attack is simple. It's a simple HTTP request with lots - and lots - of ranges. While this example uses the HEAD method, it can also be used with a GET.

    HEAD / HTTP/1.1
    Host: xxxx
    Range: bytes=0-,5-1,5-2,5-3,…

According to researchers testing the vulnerability, a successful attack requires a "modest" number of requests.

BIG-IP SOLUTIONS

There are several options to prevent this attack using BIG-IP solutions.

HEADER SANITIZATION

First, you can modify the HTTP profile to simply remove the Range header. HTTP header removal - and replacement - is a common means of manipulating request and response headers as a way to "fix" broken applications or clients, or to enable other functionality. This is a form of header sanitization, used typically to remove non-compliant header values that may or may not be malicious, but are undesirable. The Apache suggestion is to remove any Range header with 5 or more values. Note that this could itself break clients whose functionality expects a specific data set as specified by the Range header. As it is a rarely used header it is unlikely to impact clients adversely, but caution is always advised. Collaborate with developers and understand the implications before arbitrarily removing HTTP headers that may be necessary to application functionality.

HEADER VALUE SCRUBBING

You can also use an iRule to scrub the headers. By inspecting and thus detecting large numbers of ranges in the Range header, you can subsequently handle the request based on your specific needs. Possible reactions include removal of the header, rejection of the request, redirection to a honey pot, or replacement of the header. Sample iRule code (always test before deploying into production!):
    when HTTP_REQUEST {
        # remove Range requests for CVE-2011-3192 if more than 5 ranges are requested
        if { [HTTP::header "Range"] matches_regex {bytes=(([0-9\- ])+,){5,}} } {
            HTTP::header remove Range
        }
    }

Again, changing an HTTP header may have negative consequences on the functionality of the application and/or client, so tread carefully.

BIG-IP ASM ATTACK SIGNATURE

Another method of mitigation using BIG-IP solutions is to use a BIG-IP Application Security Manager (ASM) attack signature to detect and act upon an attack using this technique. The signature to add looks like:

    pcre:"/Range:[\t ]*bytes=(([0-9\- ])+,){5,}/Hi";

It is important to be aware of this exploit and how it works, as it is likely that once it is widely mitigated, attacks will begin (if they already are not) to explore the ways in which this header can be exploited. There are multiple "range" style headers, any of which may be vulnerable to similar exploitation, so it may be time to consider your current security strategy and determine whether the field of potentially exploitable headers is such that a more negative approach (default deny unless specifically allowed) may be required to secure against future DoS attacks targeting HTTP headers. There are also alternative solutions available already, including this writeup from SpiderLabs with a link to an OWASP mod_security rule file for mitigations. Stay safe out there!

- Apache Warns Web Server Admins of DoS Attack Tool
- The Many Faces of DDoS: Variations on a Theme or Two
- How To Limit URI Length Without Recompiling Apache
- F5 Friday: Multi-Layer Security for Multi-Layer Attacks
- F5 Friday: Mitigating the 'Padding Oracle' Exploit for ASP.NET
- F5 Friday: The Art of Efficient Defense
- The Infrastructure 2.0 - Security Connection
- F5 Friday: Eliminating the Blind Spot in Your Data Center Security Strategy
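As a postscript to the UPDATE above about other Range-* headers: the same scrubbing pattern can be applied to additional headers. The sketch below extends the sample rule to the legacy Request-Range header as one commonly cited example; treat it as an assumption to validate against current guidance rather than a complete list of affected headers.

    when HTTP_REQUEST {
        # Apply the same 5-range limit used above to the legacy Request-Range header.
        if { [HTTP::header "Request-Range"] matches_regex {bytes=(([0-9\- ])+,){5,}} } {
            HTTP::header remove Request-Range
        }
    }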
How To Limit URI Length Without Recompiling Apache

Use network-side scripting, of course!

While just about every developer and information security professional knows that a buffer-overflow exploit can result in the execution of malicious code, not many truly grok the "why". Fortunately, it's not really necessary for either one to be able to walk through the execution stack and trace the byte-code as it overwrites registers and then jumps to execute it. They know it's A Very Bad Thing™ and perhaps more importantly they know how to stop it.

SECONDARY and TERTIARY DEFENSE REQUIRED

The best place to prevent a buffer-overflow vulnerability is in the application code. Never trust input, whether from machine or man. Period. A simple check on the length of a string-based parameter can prevent vulnerabilities that may exist at the platform or operating system layer from being exploited. That's true of almost all vulnerabilities, not just buffer overruns, by the way. An overly long input parameter could be an attempt at XSS or SQLi as well. Both tend to extend the "normal" data, and while often obfuscated, the sheer length of such strings can indicate the presence of something malicious and almost certainly something that should be investigated.

Assuming for whatever reason that this isn't possible (and we know from research and surveys and live data from organizations like WhiteHat Security that it isn't, for a number of very valid reasons), it is likely the case that information security and operational administrators will go looking for a secondary defense. As the majority of applications today are deployed as web applications, that generally means looking to the web or application server for limitations on URL and/or parameter lengths, as those are the most likely avenues of attack.

One defense can be easily found if you're deploying on Apache in the "LimitRequestLine" compilation directive. Yes, I said compilation directive. You'll have to recompile Apache, but that's what open source is all about, right? Customization. Rapid solutions to security vulnerabilities. Agile infrastructure. While you're in there, you might want to consider evaluating the "LimitRequestFields" and "LimitRequestFieldSize" variables, too. These two variables control the number of HTTP headers allowed as well as the length of a header field and could potentially prevent an exploit of the platform and underlying operating system coming in through the HTTP headers. Yes, such exploits do exist, and as always, better safe than sorry.

While these are all certainly true and valid statements regarding open source software, the reality is that changing the core platform code results in a long-term commitment to re-applying those changes every time the core platform is upgraded or patched. Ask an enterprise PeopleSoft developer how that has worked for them over the past decade or so - but fair warning, you'll want to be out of spitting range when you do. Compilation has a secondary negative - it's not agile, even though open source is. If you run into a situation in which you need to change these values you're going to have to recompile, retest, and redeploy. And you're going to have to regression test every application deployed on that particular modified platform. Starting to see why the benefit of customization in open source software is rarely truly taken advantage of?
NETWORK-SIDE SCRIPTING

Whenever a solution involves the inspection and potential rejection or modification of HTTP-borne data based on, well, attributes like length, encoding, schema, etc… it should ring a bell and give pause for thought. These are variables in every sense of the word. If you decide to restrict input based on these values you necessarily open yourself up to maintaining and, in particular, updating those limitations across the life of the application in question. Thus it seems logical that the actual implementation of such limitations would leverage a location and solution that has as little impact on the rest of the infrastructure as possible. You want to maximize the coverage of the implementation solution while minimizing the disruption and impact on the infrastructure.

There happens to be a strategic point of control that very much fits this description: centralization of functionality at a point of aggregation that maximizes coverage while minimizing disruption. As an added bonus the coverage is platform-agnostic, which means Apache and IIS can be automagically covered without modifying the platforms themselves. That strategic point in the architecture is a network-side scripting enabled application delivery controller (the load balancer, for you old skool operations folks).

See, when you take advantage of a full proxy and really leverage its capabilities, you can implement solutions that are by definition application-aware whilst maintaining platform-agnosticism. That's important because the exploits you're looking to stop are generally specific to an application; even when they target the platform they take advantage of application data manipulation and associated loopholes in processing of that data. While the target may be the platform, the miscreant takes aim and transports the payload via, well, the payload. The data. That data is, of course, often carried in the query portion of the URI as a parameter. If it's not in the URI then it's in the payload, often as a www-url-encoded field submitted via the HTTP POST method. The script can extract the URI and validate its total length and/or it can extract each individual name-value pair (in the URI or in the body) and evaluate each one of them for length, doing whatever it is you want done with invalid length values: reject the request, chop the value to a specific size and pass it on, log it, or even route the request to an application honey-pot.

Example iRule snippet to check the length of the URI on submission:

    # the URI is obtained via [HTTP::uri]
    if { [string length [HTTP::uri]] > 1024 } { ... }

If you're thinking that it's going to be time consuming to map all the possible variables to maximum lengths, well, you're right. It will. You can of course write a generic across-the-board length-limiting script, or you could investigate a web application firewall instead. A WAF will "learn" the mapping in real-time and allow you to fine-tune or relax limitations on a parameter-by-parameter basis if you wish, with a lot less time investment. Both options, however, will do the job nicely and both provide a secondary line of defense in the face of a potential exploit that is completely avoidable if you've got the right tools in your security toolbox.
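For completeness, here is one hedged way to flesh the snippet above out into a full rule that rejects over-long URIs outright; the 1024-byte threshold is arbitrary and the 414 response is simply the standard "Request-URI Too Long" status. Tune both to the application, or substitute one of the other reactions listed above, such as logging or routing to a honey-pot.

    when HTTP_REQUEST {
        # Reject any request whose full URI (path plus query) exceeds the limit.
        if { [string length [HTTP::uri]] > 1024 } {
            HTTP::respond 414 content "Request-URI Too Long"
        }
    }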
I Can Has UR .htaccess File

Notice that isn't a question, it's a statement of fact.

Twitter is having a bad month. After it was blamed, albeit incorrectly, for a breach leading to the disclosure of both personal and corporate information via Google's GMail and Apps, its apparent willingness to allow anyone and everyone access to a .htaccess file ostensibly protecting search.twitter.com made the rounds via, ironically, Twitter.

This vulnerability at first glance appears fairly innocuous, until you realize just how much information can be placed in an .htaccess file that could have been exposed by this technical configuration faux pas. Included in the .htaccess file is a number of URI rewrites, which give an interesting view of the underlying file system hierarchy Twitter is using, as well as a (rather) lengthy list of IP addresses denied access. All in all, not that exciting, because many of the juicy bits that could be configured via .htaccess for any given website are not done so in this easily accessible .htaccess file. Some things you can do with .htaccess, in case you aren't familiar:

- Create a default error document
- Enable SSI via htaccess
- Deny users by IP
- Change your default directory page
- Redirects
- Prevent hotlinking of your images
- Prevent directory listing

.htaccess is a very versatile little file, capable of handling all sorts of security and application delivery tasks. Now what's interesting is that the .htaccess file is in the root directory and should not be accessible. Apache configuration files are fairly straightforward, and there are plenty of examples of how to prevent .htaccess - and its wealth of information - from being viewed by clients. Obfuscation, of course, is one possibility, as Apache's httpd.conf allows you to specify the name of the access file with a simple directive:

    AccessFileName .htaccess

It is a simple enough thing to change the name of the file, thus making it more difficult for automated scans to discover vulnerable access files and retrieve them. A little addition to the httpd.conf regarding the accessibility of such files, too, will prevent curious folks from poking at .htaccess and retrieving them with ease. After all, there is no reason for an access file to be viewed by a client; it's a server-side security configuration mechanism, meant only for the web server, and should not be exposed given the potential for leaking a lot of information that could lead to a more serious breach in security.

    <Files ~ "^\.ht">
        Order allow,deny
        Deny from all
        Satisfy All
    </Files>

Another option, if you have an intermediary enabled with network-side scripting, is to prevent access to any .htaccess file across your entire infrastructure. Changes to httpd.conf must be done on every server, so if you have a lot of servers to manage and protect it's quite possible you'd miss one due to the sheer volume of servers to slog through. Using a network-side scripting solution eliminates that possibility because it's one change that can immediately affect all servers. Here's an example using an iRule, but you should also be able to use mod_rewrite to accomplish the same thing if you're using an Apache-based proxy:

    when HTTP_REQUEST {
        # Check the requested URI
        switch -glob [string tolower [HTTP::path]] {
            "/.ht*" { reject }
            default { pool bigwebpool }
        }
    }

However you choose to protect that .htaccess file, just do it.
This isn't rocket science, it's a straight-up simple configuration error that could potentially lead to more serious breaches in security - especially if your .htaccess file contains more sensitive (and informative) information.

- An Unhackable Server is Still Vulnerable
- Twittergate Reveals E-Mail is Bigger Security Risk than Twitter
- Automatically Removing Cookies
- Clickjacking Protection Using X-FRAME-OPTIONS Available for Firefox
- Stop brute force listing of HTTP OPTIONS with network-side scripting
- Jedi Mind Tricks: HTTP Request Smuggling
- I am in your HTTP headers, attacking your application
- Understanding network-side scripting
Using Resource Obfuscation to Reduce Risk of Mass SQL Injection

One of the ways miscreants locate targets for mass SQL injection attacks that can leave your applications and data tainted with malware and malicious scripts is to simply seek out sites based on file extensions. Attackers know that .ASP and .PHP files are more often than not vulnerable to SQL injection attacks, and thus use Google and other search engines to seek out these target-rich environments by extension. Using a non-standard extension will not eliminate the risk of being targeted by a mass SQL injection attack, but it can significantly reduce the possibility because your site will not automatically turn up in cursory searches seeking vulnerable sites. As Jeremiah Grossman often points out, while cross-site scripting may be the most common vulnerability discovered in most sites, SQL injection is generally the most exploited vulnerability, probably due to the ease with which it can be discovered, so anything you can do to reduce that possibility is a step in the right direction.

You could, of course, embark on a tedious and time-consuming mission to rename all files such that they do not show up in a generic search. However, this requires much more than simply replacing file extensions, as every reference to the files must also necessarily be adjusted lest you completely break your application. You may also be able to automatically handle the substitution and required mapping in the application server itself by modifying its configuration. Alternatively there is another option: resource obfuscation. Using a network-side scripting technology like iRules or mod_rewrite, you have a great option at your disposal to thwart the automated discovery of potentially vulnerable applications.

HIDE FILE EXTENSIONS

You can implement network-side script functionality that simply presents to the outside world a different extension for all PHP and ASP files. While internally you are still serving up application.php, the user - whether search engine, spider, or legitimate user - sees application.zzz. The network-side script must be capable of replacing all instances of ".php" with ".zzz" in responses while interpreting all requests for ".zzz" as ".php" in order to ensure that the application continues to act properly. The following iRule shows an example of both the substitution in the response and the replacement in the request to enable this functionality:

    when HTTP_REQUEST {
        # This replaces ".zzz" with ".php" in the URI
        HTTP::uri [string map {".zzz" ".php"} [HTTP::uri]]
    }
    when HTTP_RESPONSE {
        STREAM::disable
        if { [HTTP::header value "Content-Type"] contains "text" } {
            STREAM::expression "@.php@.zzz@"
            STREAM::enable
        }
    }

One of the benefits of using a network-side script like this one to implement resource obfuscation is that in the event that the bad guys figure out what you're doing, you can always change the mapping in a centralized location and it will immediately propagate across all your applications - without needing to change a thing on your servers or in your application.

HIDE YOUR SERVER INFORMATION

A second use of resource obfuscation is to hide the server information. Rather than let the world know you're running on IIS or Apache version whatever with X and Y module extensions, consider changing the configuration to provide minimal - if any - information about the actual application infrastructure environment.
For Apache you can change this in httpd.conf:

    ServerSignature Off
    ServerTokens Prod

These settings prevent Apache from adding the "signature" at the bottom of pages that contains the server name and version information, and change the HTTP Server header to simply read "Apache". In IIS you can disable the Server header completely by setting the following registry key to "1":

    HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\DisableServerHeader

If you'd rather change the IIS Server header instead of removing it, this KnowledgeBase Note describes how to use URLScan to achieve your goals. If you'd like to change the HTTP Server header in a centralized location you can use mod_security or network-side scripting to manipulate the Server header. As with masking file extensions, a centralized location for managing the HTTP Server header can be beneficial in many ways, especially if there are a large number of servers on which you need to make configuration changes. Using iRules, just replace the header with something else:

    when HTTP_RESPONSE {
        HTTP::header replace Server new_value
    }

Using mod_security you can set the SecServerSignature directive:

    SecServerSignature "My Custom Server Name"

These techniques will not prevent your applications from being exploited, nor do they provide any real security against an attack, but they can reduce the risk of being discovered and subsequently targeted by making it more difficult for miscreants to recognize your environment as one that may be vulnerable to attack.