F5 Friday: Load Balancing MySQL with F5 BIG-IP
Scaling MySQL just got a whole lot easier.

Load balancing MySQL – any database, really – is not a trivial task. Generally speaking, you can't simply round-robin your way through a cluster of MySQL databases as a means to achieve scalability. It is databases, in fact, that have driven a wide variety of scalability patterns, such as sharding and partitioning, in pursuit of the ultimate goal of high performance and scalability simultaneously. Unfortunately, most folks don't architect their applications with scalability in mind. A single database is all that's necessary at first, and because of the way in which the application interacts with the database, it doesn't make sense to code in support for multiple database instances, such as is often implemented with a MySQL master-slave cluster. That's because the application has to actually open a connection to the database in question. If you're only starting with one database, you really can't code in a connection to a separate instance. Eventually that application's usage grows and the demands upon the database require a more scalable approach.

Enter the MySQL master/slave relationship. A typical configuration is to maintain the master as the "write" database, i.e. all updates and/or inserts must use the master, while the slave instance is used as a "read only" instance. Obviously this means the application code must be changed to support this kind of functional sharding. Unless, that is, you leverage network server virtualization from a load balancing service capable of acting as a full proxy at layer 7 (application), like BIG-IP. This solution leverages iRules to implement database load balancing. While this specific example is designed to perform the common functional sharding pattern of read-write separation for a master-slave MySQL cluster, the flexibility of iRules is such that other architectural solutions can easily be designed using the same basic functions. Location-based sharding is another popular means of scaling databases, and using the GeoLocation capabilities of BIG-IP along with iRules to inspect and route database requests, it should be a fairly trivial architectural task to implement.

The ability to further extend sharding or other distribution methodologies for scaling databases without modifying the application itself is a huge bonus for both developers and operations. By decoupling the application from the database, it provides a more flexible set of scalability domains in which technology-targeted scalability strategies can be leveraged independently of the other layers. This is an important facet of agile infrastructure architecture and should not be underestimated as a benefit of network server virtualization.
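To make the pattern concrete, here is a deliberately simplified, hypothetical iRule sketch of read-write separation. It is not the MySQL Proxy iRule listed in the resources below – that implementation parses the MySQL protocol properly – but it illustrates the basic idea: inspect the client's request and steer reads to one pool and writes to another. The pool names are assumptions for illustration only; always test against your own environment before relying on anything like this.

    when CLIENT_ACCEPTED {
        # Collect TCP payload so we can peek at the query before picking a pool
        TCP::collect
    }

    when CLIENT_DATA {
        # Naive inspection: treat SELECT statements as reads, everything else as writes.
        # A production rule must parse MySQL packets properly and account for the fact
        # that MySQL servers speak first (greeting/authentication) before queries arrive.
        if { [string toupper [TCP::payload]] contains "SELECT" } {
            pool mysql_read_pool
        } else {
            pool mysql_write_pool
        }
        # Release the collected data and resume normal processing
        TCP::release
    }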
MySQL Load Balancing Resources:
MySQL Proxy iRule
MySQL Proxy iApp (deployment package for BIG-IP v11)
The Full-Proxy Data Center Architecture
Infrastructure Scalability Pattern: Sharding Streams
Infrastructure Scalability Pattern: Sharding Sessions
Infrastructure Scalability Pattern: Partition by Function or Type
IT as a Service: A Stateless Infrastructure Architecture Model
F5 Friday: Platform versus Product
At the Intersection of Cloud and Control…
What is a Strategic Point of Control Anyway?
All F5 Friday Posts on DevCentral
Why Single-Stack Infrastructure Sucks

F5 Friday: How to Create Your Own URL Shortener

Network-side scripting and really big, really fast tables let you implement your own (controllable) URL shortening service.

We all use URL shorteners to share links, especially via Twitter and other space-constrained communications channels. At the same time, we're leery of clicking on a short URL that comes from someone we don't know well enough to trust implicitly. And unless the service you're using to exchange thoughts automatically applies a URL shortening service to any links contained within your message, you're likely creating those short URLs by hand. We love to hate them and we hate to love them. But it is what it is, and what it is is both useful and somewhat risky. Basically, there are three core issues with leveraging URL shortening services:

Unless you've got a developer on hand (and even sometimes if you do), external URL shortening services require manual creation.
Most services don't allow "custom" domains, i.e. they don't allow you to use your own domain and simply shorten the URI. Those that do (bit.ly, for example) require changes to your infrastructure (specifically DNS entries).
Shortened URLs shared via traditional services are often suspect because these services have been used to "hide" the destination. The malicious use of short URLs engenders suspicion with many and a refusal to investigate on the off-chance the destination is a malware-laden site or something NSFW (Not Safe For Work).

And yet sharing URLs becomes increasingly tedious the longer the URL is. Really, just because you can use several thousand characters doesn't mean you should. Thus URL shorteners, despite their shortcomings, have become the method du jour for turning long URLs into easily consumed, sharable tidbits. We hate to love them, we love to hate them. We're addicted to short URLs. To address the shortcomings, wouldn't it be nice if you could maintain your own domain and still shorten those URLs? And wouldn't it be even nicer if that meant you could actually gather usage statistics about that URL? While bit.ly's "pro" service allows the former, it's still amazingly immature in the reporting department, and it's nigh-unto-impossible to extract that data any way but manually. Finally, wouldn't it be nice if you could integrate the shortening process in a dynamic way rather than always creating them manually? Have I got a deal for you…

iRULE CUSTOM URL SHORTENER

I talk a lot about network-side scripting as an agile method of, well, manipulating application requests and data on demand. From inbound inspection to outbound rewriting, network-side scripting is the realization of one of the foundational dynamic data center components: dynamic infrastructure. Providing real-time interaction with requests and responses traversing an intelligent intermediary means devops, infosec, developers, and network teams have the tools with which they can address a variety of obstacles and pain points. In this case, it's adding business value and increasing visibility; maintaining control and ensuring the integrity of links shared for whatever the reason. It also allows the ability to better discern from where and from whom links are being picked up. It's real-time campaign tracking. The core value here though is two-fold: (1) you maintain control and (2) you use your own domain to provide some measure of integrity assurance to those you're sharing the links with. The secondary and tertiary benefits are in having a way to track business and marketing campaigns.
An immediate question should be (it was for me) "what about performance?" Just how large can a table containing a mapping of short URIs to long URIs get before it starts to impede performance? This is essentially a proxy solution, so every microsecond it takes to look up the short URI and replace it with a long URI adds to the response time of a request. Well, the bonus if you're using BIG-IP LTM and an iRule is that the functionality takes advantage of the core platform session table which, if you know a thing or two about networking, absolutely must be high-speed and high-performance in its ability to perform lookups because it can grow to billions of entries in high-traffic situations. So the answer from the experts to my question was, "Giant. Huge. Ginormous."

The second bonus is that you don't necessarily have to do a redirect, which adds to the overall response time. With out-of-band URL shortening services the request goes to a third-party proxy, is translated, and a redirect to the original is returned to the user. Then the user's browser automatically makes a second request and gets the content they wanted. With an integrated, full-proxy, iRule-based solution the redirect isn't strictly necessary. While you can still use that same method, it would be much more efficient to simply look up the short URI, grab the full URI, and then simply replace the requested URI with the real one and send it on to the server. You're eliminating time on the wire between the third-party service and the user completely, and the associated TCP session setup/teardown time, which we know is rather expensive in terms of time and resources. You can still do a redirect if you want to, but it's completely unnecessary unless, of course, you're planning on offering the capability as an out-of-band service to your customers. So by using an iRule you can improve performance, increase visibility, and provide some measure of integrity assurance while you're out there sharing links with whomever you're sharing them with. Additionally, it's just a darn cool use of iRules that has a lot of potential to be modified and used for other situations in which URI mapping might be useful.

And of course it happens to be the case that DevCentral's newest cohort, George Watkins, has written up an iRule to handle URI shortening. iRule wizard Colin Walker helped optimize the rule, so it ought to be a very efficient little iRule. Go ahead and give George's URI shortening iRule a look-see and try it out. If you don't have a BIG-IP yourself, then go ahead and get one – iRules are a part of the core TMOS platform upon which BIG-IP products and modules are based, so the VE (Virtual Edition) of BIG-IP LTM has everything you need to deploy the iRule and take it for a spin.

NOTE: George's version of the iRule is based on an out-of-band service model. Using HTTP::uri instead of HTTP::redirect for the URL will change the behavior and eliminate the overhead of the redirect, but don't forget to assign the iRule to the appropriate VIP. It is also a manual creation process, but there's no reason you could not integrate the iRule functionality into the response processing and rewrite all URIs in a page to be small URLs automatically – or just any URL with a length greater than . Happy coding!
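For illustration, here is a minimal, hypothetical sketch of the inline (rewrite) approach described in the NOTE above. It is not George's iRule – his handles creation, collisions, and expiration – and it assumes short-to-long mappings have already been stored in the session table under keys like "shorturl:/abc123"; those key names and the commented-out alternatives are made up for this example.

    when HTTP_REQUEST {
        # Look up the requested short URI in the session table
        set long_uri [table lookup "shorturl:[HTTP::uri]"]
        if { $long_uri ne "" } {
            # Inline rewrite: swap in the real URI and let the request continue
            # to the content servers, avoiding a client-visible redirect.
            HTTP::uri $long_uri
            # Out-of-band alternative: HTTP::redirect $long_uri
        }
    }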
Related Posts
All F5 Friday Entries on DevCentral
All About iRules
from tag iRules:
F5 Friday: Eavesdropping on Availability
Defeating Attacks Easier Than Detecting Them
F5 Friday: An On-Demand Turing Test
Out, Damn'd Bot! Out, I Say!
F5 Friday: A Network Heatwave That's Good For Operations
No Shirt, No Shoes, No HTTP Service
Is Vendor Lock-In Really a Bad Thing?
AJAX and Network-Side Scripting
Automatically Removing Cookies
from tag performance:
The Great Client-Server Architecture Myth
IE8: Robbing Peter to pay Paul
Your Network is Not My Network
del.icio.us Tags: MacVittie, F5, F5 Friday, George Watkins, Colin Walker, iRules, URL shortener, bit.ly, performance

F5 Friday: The 2048-bit Keys to the Kingdom
There's a rarely mentioned move from 1024-bit to 2048-bit key lengths in the security demesne… are you ready? More importantly, are your infrastructure and applications ready?

Everyone has likely read about DNSSEC and the exciting day on which the root servers were signed. In response to security concerns – and very valid ones at that – around the veracity of responses returned by DNS, which underpins the entire Internet, the practice of signing responses was introduced. Everyone who had anything to do with encryption and certificates said something about the initiative. But less mentioned was a move to leverage longer RSA key lengths as a means to increase the security of the encryption of data, a la SSL (Secure Sockets Layer). While there have been a few stories on SSL vulnerabilities – Dan Kaminsky illustrated flaws in the system at Black Hat last year – there's been very little public discussion about the transition in key sizes across the industry. The last time we had such a massive move in the cryptography space was back when we moved from 128-bit to 256-bit keys. Some folks may remember that many early adopters of the Internet had issues with browser support back then, and the performance and capacity of infrastructure were very negatively impacted. Well, that's about to happen again as we move from 1024-bit keys to 2048-bit keys – and the recommended transition deadline is fast approaching. In fact, NIST is recommending the transition by January 1st, 2011, and several key providers of certificates are already restricting the issuance of certificates to 2048-bit keys.

NIST: Recommends transition to 2048-bit key lengths by Jan 1st, 2011 (Special Publication 800-57 Part 1, Table 4).
VeriSign: Started focusing on 2048-bit keys in 2006; complete transition by October 2010. Indicates their transition is to comply with best practices as recommended by NIST.
GeoTrust: Clearly indicates why it transitioned to only 2048-bit keys in June 2010.
Entrust: Also following NIST recommendations: TN 7710 - Entrust is moving to 2048-bit RSA keys.
GoDaddy: "We enforced a new policy where all newly issued and renewed certificates must be 2048-bit." Extended Validation (EV) required 2048-bit keys on 1/1/09.

Note that it isn't just providers who are making this move. Microsoft uses and recommends 2048-bit keys per the NIST guidelines for all servers and other products. Red Hat recommends 2048-bit or longer keys using the RSA algorithm. And as of December 31, 2013, Mozilla will disable or remove all root certificates with RSA key sizes smaller than 2048 bits. That means sites that have not made the move as of that date will find it difficult for customers and visitors to hook up, as it were.

THE IMPACT on YOU

The impact on organizations that take advantage of encryption and decryption to secure web sites, sign code, and authenticate access is primarily in performance and capacity. The decrease in performance as key sizes increase is not linear, but more along the lines of exponential. For example, though the key size is shifting by a factor of two, F5 internal testing indicates that such a shift results in approximately a 5x reduction in performance (as measured by TPS – Transactions Per Second). This reduction in performance has also been seen by others in the space, as indicated by a recent Citrix announcement of a 5x increase in performance of its cryptographic processing. This decrease in TPS is due primarily to heavy use of the key during the handshaking process.
The impact on you is heavily dependent on how much of your infrastructure leverages SSL. For some organizations – those that require SSL end-to-end – the impact will be much higher. Any infrastructure component that terminates SSL and re-encrypts the data as a means to provide inline functionality (think IDS, load balancer, web application firewall, anti-virus scan) will need to also support 2048-bit keys, and if new certificates are necessary these, too, will need to be deployed throughout the infrastructure. Any organization with additional security/encryption requirements over and above simply SSL encryption, such as FIPS 140-2 or higher, is looking at new/additional hardware to support the migration. Note: There are architectural solutions to avoid that type of forklift upgrade; we'll get to that shortly.

If your infrastructure is currently supporting SSL encryption/decryption on your web/application servers, you'll certainly want to start investigating the impact on capacity and performance now. SSL with 1024-bit keys typically requires about 30% of a server's resources (RAM, CPU) and the increase to 2048-bit keys will require more, which necessarily comes from the resources used by the application. That means a decrease in capacity of applications running on servers on which SSL is terminated, and typically a degradation in performance. In general, the decrease we (and others) have seen in TPS performance on hardware should give you a good idea of what to expect on software or virtual network appliances. As a general rule you should determine what level of SSL transactions you are currently licensed for and divide that number by five to determine whether you can maintain the capacity you have today after a migration to 2048-bit keys. For example, a deployment licensed for 25,000 SSL TPS with 1024-bit keys should plan on roughly 5,000 TPS after the move. It may not be a pretty picture.

ADVANTAGES of SSL OFFLOAD

If the advantages of offloading SSL to an external infrastructure component were significant before, the move from 1024-bit keys to 2048-bit keys makes them nearly indispensable to maintaining the performance and capacity of existing applications and infrastructure. Offloading SSL to an external infrastructure component enabled with specialized hardware further improves the capacity and performance of these mathematically complex and compute-intensive processes.

ARCHITECTURAL SOLUTION to support 1024-bit key only applications

If you were thinking about leveraging a virtual network appliance for this purpose, you might want to think about that one again. Early testing of RSA operations using 2048-bit keys on 64-bit commodity hardware shows a capacity in the hundreds of transactions per second. Not tens of thousands, not even thousands, but hundreds. Even if the only use of SSL in your organization is to provide secure web-based access to e-mail, a la Microsoft Web Outlook, this is likely unacceptable. Remember there is rarely a 1:1 relationship between connections and web applications today, and each connection requires the use of those SSL operations, which can drastically impact capacity in terms of user concurrency. Perhaps as important is the ability to architect around limitations imposed by applications on the security infrastructure. For example, many legacy applications (Lotus Notes, IIS 5.0) do not support 2048-bit keys. Thus meeting the recommendation to migrate to 2048-bit keys is all but impossible for this class of application. Leveraging the capabilities of an application delivery controller that can support 2048-bit keys, however, allows for the continued support of 1024-bit keys to the application while supporting 2048-bit keys to the client.
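As a rough illustration of that mediation pattern, the sketch below shows tmsh-style configuration for terminating 2048-bit SSL on the client side of a BIG-IP virtual server while re-encrypting to an application that still uses its existing 1024-bit certificate. The object and file names are invented for this example, and exact syntax varies by TMOS version, so treat it as a conceptual outline rather than copy-and-paste configuration.

    # Client-facing profile using the new 2048-bit certificate and key (hypothetical file names)
    create ltm profile client-ssl app_clientssl_2048 { defaults-from clientssl cert app_2048.crt key app_2048.key }

    # Server-facing profile re-encrypting to the legacy 1024-bit application
    create ltm profile server-ssl app_serverssl_legacy { defaults-from serverssl }

    # Attach both profiles to the virtual server fronting the application
    modify ltm virtual vs_legacy_app profiles add { app_clientssl_2048 { context clientside } app_serverssl_legacy { context serverside } }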
ARE YOU READY?

That's a question only you can answer, and you can only answer it by taking a good look at your infrastructure and applications. Now is a good time to evaluate your SSL strategy to ensure it's up to the challenge of 2048-bit keys. Check your licenses, determine your current capacity and requirements, and compare those to what can be realistically expected once the migration is complete. Validate whether applications currently using 1024-bit keys can support 2048-bit keys or whether such a migration is contraindicated by the application, and investigate whether a proxy-based (mediation) solution might be appropriate. And don't forget to determine whether or not compliance with regulations may require new hardware solutions. Now this is an F5 Friday post, so you knew there had to be some tie-in, right? Other than the fact that the red glowing ball on every BIG-IP just looks hawesome in the dim light of a data center, F5 solutions can mitigate many potential negative impacts resulting from a migration of 1024-bit to 2048-bit key lengths:

BIG-IP Specialized Hardware: BIG-IP hardware platforms include specialized RSA acceleration hardware that improves the performance of the RSA operations necessary to support encryption/decryption and SSL communication, and enables higher capacities of the same.
EM (Enterprise Manager) Streamlines Certificate Management: F5's centralized management solution, EM (Enterprise Manager), allows an organization to better manage a cryptographic infrastructure by providing the means to monitor and manage key expirations across all F5 solutions and collect TPS history and usage when sizing to better understand capacity constraints.
BIG-IP Flexibility: BIG-IP is a full proxy-based solution. It can mediate between clients and applications that have disparate requirements, such as may be the case with key sizes. This allows you to use 2048-bit keys to the client but retain the use of 1024-bit keys to web/application servers and other infrastructure solutions.
Strong partnerships and integration with leading centralized key management and crypto vendors that provide automated key migration and provisioning through open and standards-based APIs and robust scripting capabilities.
DNSSEC: Enhance security through DNSSEC to validate domain names. Although it has been suggested that 1024-bit keys might be sufficient for signing zones, with the forced migration to 2048-bit keys there will be increased pressure on the DNS infrastructure that may require a new solution for your DNS systems.

THIS IS IN MANY REGARDS INFOSEC'S "Y2K"

In many ways a change of this magnitude is, for Information Security professionals, their "Y2K", because such a migration will have an impact on nearly every component and application in the data center. Unfortunately for the security folks, we had a lot more time to prepare for Y2K… so get started, go through the checklist, and get yourself ready to make the switch now before the eleventh hour is upon us.
Related blogs & articles: The Anatomy of an SSL Handshake [Network Computing] DNSSEC Readiness [ISC.org] Get Ready for the Impact of 2048-bit RSA Keys [Network Computing] SSL handshake latency and HTTPS optimizations [semicomplete.com] Pete Silva Demonstrates the FirePass SSL-VPN Data Center Feng Shui: SSL WILS: SSL TPS versus HTTP TPS over SSL SSL performance - DevCentral - F5 DevCentral > Community > Group ... DevCentral Weekly Roundup | Audio Podcast - SSL iControl Apps - #12 - Global SSL Statistics > DevCentral > F5 ... Oracle 10g SSL Offload - JInitiator:X509CertChainInvalidErr error ... Requiring an SSL Certificate for Parts of an Application ... The Order of (Network) Operations1.2KViews0likes4CommentsF5 Friday: Gracefully Scaling Down
What goes up must come down. The question is how much it hurts (the user).

An oft-ignored side of elasticity is scaling down. Everyone associates scaling out/up with elasticity of cloud computing, but the other side of the coin is just as important, maybe more so. After all, what goes up must come down. The trick is to scale down gracefully, i.e. to do it in such a way as to prevent the disruption of service to existing users while scaling back down after a spike in demand. The ramifications of not scaling down are real in terms of utilization and therefore cost. Scaling up without the means to scale back down means higher costs, and simply shutting down an instance that is currently in use can result in angry users as service is disrupted. What's necessary is to be able to gracefully scale down; to indicate somehow to the load balancing solution that a particular instance is no longer necessary and begin preparation for eventually shutting it down. Doing so gracefully requires that you are somehow able to quiesce or bleed off the connections. You want to continue to service those users who are currently connected to the instance while not accepting any new connections. This is one of the benefits of leveraging an application-aware application delivery controller versus a simple load balancer: the ability to receive instruction in-process to begin preparation for shutdown without interrupting existing connections.

SERVING UP ACTIONABLE DATA

BIG-IP users have always had the ability to specify whether disabling a particular "node" or "member" results in the rejection of all connections (including existing ones) or in refusing new connections while allowing old ones to continue to completion. The latter technique is often used in preparation for maintenance on a particular server for applications (and businesses) that are sensitive to downtime. This method maintains availability while accommodating necessary maintenance. In version 10.2 of the core BIG-IP platform a new option was introduced that more easily enables the process of draining a server/application's connections in preparation for being taken offline. Whether the purpose is maintenance or simply the scaling-down side of elastic scalability is really irrelevant; the process is much the same. Being able to direct a load balancing service in the way in which connections are handled from the application is an increasingly important capability, especially in a public cloud computing environment, because you are unlikely to have the direct access to the load balancing system necessary to manually engage this process. By providing the means by which an application can not only report to but direct the load balancing service, some measure of customer control over the deployment environment is re-established without introducing the complexity of requiring the provider to manage the thousands (or more) credentials that would otherwise be required to allow this level of control over the load balancer's behavior.

HOW IT WORKS

For specific types of monitors in LTM (Local Traffic Manager) – HTTP, HTTPS, TCP, and UDP – there is a new option called "Receive Disable String." This "string" is just that, a string that is found within the content returned from the application as a result of the health check. In phase one we have three instances of an application (physical or virtual, doesn't matter) that are all active. They all have active connections and are all receiving new connections.
In phase two a health check on one server returns a response that includes the string "DISABLE ME." BIG-IP sees this and, because of its configuration, knows that this means the instance of the application needs to gracefully go offline. LTM therefore continues to direct existing connections (sessions) with that instance to the right application (phase 3), but subsequently directs all new connection requests to the other instances in the pool (farm, cluster). When there are no more existing connections the instance can be taken offline or shut down with zero impact to users.

The combination of "receive string" and "receive disable string" impacts the way in which BIG-IP interprets the instruction. A "receive string" typically describes the content received that indicates an available and properly executing application. This can be as simple as "HTTP 200 OK" or as complex as looking for a specific string in the response. Similarly the "receive disable" string indicates a particular string of text that indicates a desire to disable the node and begin the process of bleeding off connections. This could be as simple as "DISABLE" as indicated in the above diagram, or it could just as easily be based solely on HTTP status codes. If an application instance starts returning 50x errors because it's at capacity, the load balancing policy might include a live disable of the instance to allow it time to cool down – maintaining existing connections while not allowing new ones. Because action is based on matching a specific string, the possibilities are pretty much wide open. The following table describes the possible interactions between the two receive string types:

LEVERAGING as a PROVIDER

One of the ways in which a provider could leverage this functionality to provide differentiated value-added cloud services (as Randy Bias calls them) would be to define an application health monitoring API of sorts that allows customers to add to their application a specific set of URIs that are used solely for monitoring and can thus control the behavior of the load balancer without requiring per-customer access to the infrastructure itself. That's a win-win, by the way. The customer gets control, but so does the provider. Consider a health monitoring API that is a single URI: http://$APPLICATION_INSTANCE_HOSTNAME/health/check. Now provide a set of three options for customers to return (these are likely oversimplified for illustration purposes, but not by much):

ENABLE
QUIESCE
DISABLE

For all application instances the BIG-IP will automatically use an HTTP-derived monitor that calls $APP_INSTANCE/health/check and examines the result. The monitor would use "ENABLE" as the "receive string" and "QUIESCE" as the "receive disable" string. Based on the string returned by the application, the BIG-IP takes the appropriate action (as defined by the table above). Of course this can also easily be accomplished by providing a button on the cloud management interface to do the same via iControl, but this option is more able to be programmatically defined by customers and thus is more dynamic and allows for automation. And of course such an implementation isn't relegated only to service providers; IT organizations in any environment can take advantage of such an implementation, especially if they're working toward an automated data center and/or self-service provisioning/management of IT services. That is infrastructure as a service.
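A minimal, hypothetical sketch of such a monitor in tmsh-style configuration might look like the following. The monitor name, URI, and strings mirror the example above; exact attribute names (particularly the receive-disable property) vary by TMOS version, so verify against your own release before using anything like this.

    # Hypothetical health monitor implementing the ENABLE/QUIESCE convention described above
    create ltm monitor http app_health_check {
        defaults-from http
        send "GET /health/check HTTP/1.1\r\nHost: app\r\nConnection: close\r\n\r\n"
        recv "ENABLE"
        recv-disable "QUIESCE"
    }

    # Attach the monitor to the pool whose members expose /health/check
    modify ltm pool app_pool monitor app_health_check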
Yes, this means modification to the application being deployed. No, I don't think that's a problem – cloud and Infrastructure as a Service (IaaS), at least real IaaS, is going to necessarily require modifications to existing applications, and new applications will need to include this type of integration in the future if we are to take advantage of the benefits afforded by a more application-aware infrastructure and, conversely, a more infrastructure-aware application architecture.

Related Posts

F5 Friday: It is now safe to enable File Upload
Web 2.0 is about sharing content – user-generated content. How do you enable that kind of collaboration without opening yourself up to the risk of infection? Turns out developers and administrators have a couple of options…

The goal of many a miscreant is to get files onto your boxen. The second step after that is often remote execution, or merely the hope that someone else will look at/execute the file and spread chaos (and viruses) across your internal network. It's a malicious intent, to be sure, and makes developing/deploying Web 2.0 applications a risky proposition. After all, Web 2.0 is about collaboration and sharing of content, and if you aren't allowing the latter it's hard to enable the former. Most developers know about and have used the ability to upload files of just about any type through a web form. Photos, documents, presentations – these types of content are almost always shared through an application that takes advantage of the ability to upload data via a simple web form. But if you allow users to share legitimate content, it's a sure bet (more sure even than answering "yes" to the question "Will it rain in Seattle today?") that miscreants will find and exploit the ability to share content. Needless to say, information security professionals are therefore not particularly fond of this particular "feature" and in some organizations it is strictly verboten (that's forbidden for you non-German speakers). So wouldn't it be nice if developers could continue to leverage this nifty capability to enable collaboration? Well, all you really need to do is integrate with an anti-virus scanning solution and only accept content that is deemed safe, right? After all, that's good enough for e-mail systems, and developers should be able to argue that the same should be good enough for web content, too. The bigger problem is in the integration. Luckily, ICAP (Internet Content Adaptation Protocol) is a fairly ready answer to that problem.

SOLUTION: INTEGRATE ANTI-VIRUS SCANNING via ICAP

The Internet Content Adaptation Protocol (ICAP) is a lightweight HTTP based protocol specified in RFC 3507 designed to off-load specific content to dedicated servers, thereby freeing up resources and standardizing the way in which features are implemented. ICAP is generally used in proxy servers to integrate with third party products like antivirus software, malicious content scanners and URL filters. ICAP in its most basic form is a "lightweight" HTTP based remote procedure call protocol. In other words, ICAP allows its clients to pass HTTP based (HTML) messages (Content) to ICAP servers for adaptation. Adaptation refers to performing the particular value added service (content manipulation) for the associated client request/response. -- Wikipedia, ICAP

Now obviously developers can directly take advantage of ICAP and integrate with an anti-virus scanning solution directly. All that's required is to extract every file in a multi-part request, send each of them to an AV-scanning service, and determine based on the result whether to continue processing or toss those bits into /dev/null. This is assuming, of course, that it can be integrated: packaged applications may not offer the ability, and even open-source applications which ostensibly do may be written in a language or use frameworks for which the organization simply does not have the skills. Or perhaps the cost over time of constantly modifying the application after every upgrade/patch is just not worth the effort.
For applications for which you can add this integration, it should be fairly simple, as developers are generally familiar with HTTP and RPC and understand how to use "services" in their applications. Of course this being an F5 Friday post, you can probably guess that I have an alternative (and of course more efficient) solution than integration into the code. An external solution that works for custom as well as packaged applications and requires a lot less long-term maintenance – a WAF (Web Application Firewall).

BETTER SOLUTION: web application firewall INTEGRATION

The latest greatest version (v10.2) of F5 BIG-IP Application Security Manager (ASM) included a little-touted feature that makes integration with an ICAP-enabled anti-virus scanning solution take approximately 15.7 seconds to configure (YMMV). Most of that time is likely logging in and navigating to the right place. The rest is typing the information required (server host name, IP address, and port number) and hitting "save".

Figure: F5 Application Security Manager (ASM) v10 includes easy integration with A/V solutions.

It really is that simple. The configuration is actually an HTTP "class", which can be thought of as a classification of sorts. In most BIG-IP products a "class" defines a type of traffic closely based on a specific application protocol, like HTTP. It's quite polymorphic in that defining a custom HTTP class inherits the behavior and attributes of the "parent" HTTP class and your configuration extends that behavior and attributes, and in some cases allows you to override default (parent) behavior. The ICAP integration is derived from an HTTP class, so it can be "assigned" to a virtual server, a URI, a cookie, etc… In most ASM configurations an HTTP class is assigned to a virtual server and therefore it sees all requests sent to that server. In such a configuration ASM sees all traffic and thus every file uploaded in a multipart payload, and will automatically extract it and send it via ICAP to the designated anti-virus server where it is scanned. The action taken upon a positive result, i.e. the file contains bad juju, is configurable. ASM can block the request and present an informational page to the user while logging the discovery internally, externally or both. It can forward the request to the web/application server with the virus and log it as well, allowing the developer to determine how best to proceed. ASM can be configured to never allow requests to reach the web/application server that have not been scanned for viruses using the "Guarantee Enforcement" option. When configured, if the anti-virus server is unavailable or doesn't respond, requests will be blocked. This allows administrators to configure a "fail closed" option that absolutely requires AV scanning before a request can be processed.

A STRATEGIC POINT of CONTROL

Leveraging a strategic point of control to provide AV scanning integration and apply security policies regarding the quality of content has several benefits over its application-modifying, code-based integration cousin:

Allows integration of AV scanning in applications for which it is not feasible to modify the application, for whatever reason (third-party, lack of skills, lack of time, long-term maintenance after upgrades/patches).
Reduces the resource requirements of web/application servers by offloading the integration process and only forwarding valid uploads to the application.
In a cloud-based or other pay-per-use model this reduces costs by eliminating the processing of invalid requests by the application.
Aggregates logging/auditing and provides consistency of logs for compliance and reporting, especially to prove "due diligence" in preventing infection.

Related Posts
All F5 Friday Entries on DevCentral
All About ASM

F5 Friday: Infoblox and F5 Do DNS and Global Load Balancing Right.
#F5Friday #F5 Infoblox and F5 improve resilience, compliance, and security for global load balancing.

If you're a large corporation, two things that are a significant challenge for your network administrators are DNS management and Global Load Balancing (GLB) configuration/management. With systems spread across a region, country, or the globe, the amount of time investment required to keep things running smoothly ranges from "near zero" during quiet times to "why am I still here at midnight?" in times of major network change or outages. Until now. Two market leaders – Infoblox and F5 Networks – have teamed up to make DNS – including DNSSEC – and GLB less time-consuming and error-prone. Infoblox has extended their Trinzic DDI family of products with Infoblox Load Balancer Manager (LBM) for F5 Global Traffic Manager (GTM). The LBM turns a loose collection of load balancers into a dynamic, automated Infoblox Grid™. What does all that rambling and all those acronyms boil down to? Here's the bullet list, followed with more detail:

Centralized management of DNS and global load balancing services.
Application of the Infoblox Security Framework across F5 GTM devices.
Automation of best practices.
Allows administrators to delegate responsibility for small subsets of the network to responsible individuals.
Enables rapid identification of network problems.
Tracks changes to load balancing configurations for auditing and compliance.

While F5 GTM brings DNS delivery services, global load balancing, workload management, disaster recovery, and application management to the enterprise, Infoblox LBM places a management layer over both global DNS and global load balancing, making them more manageable, less error-prone, and more closely aligned to your organizational structure. LBM is a module available on Infoblox DDI Grid devices and VMs, and GTM is delivered either as a product module on BIG-IP or as a VM. With unified management, Infoblox LBM shows at a glance what is going on in the network. Since Infoblox DDI and F5 BIG-IP GTM both interface to multiple Authentication, Authorization, and Access Control (AAA) systems, Infoblox LBM allows unified security management with groups and users, and further allows control of a given set of objects (say, all hardware in the San Francisco datacenter) to be delegated to local administrators without having to expose the entire infrastructure to those users. For best practices, LBM implements single-click testing of connections to BIG-IP GTM devices, synchronization of settings across BIG-IP GTM instances for consistency, and auto-discovery of settings, including protocol, DNS profiles, pools, virtual IPs, servers, and domains being load balanced. In short, LBM gives a solid view of what is happening inside your BIG-IP GTM devices and presents all appliances in a unified user interface. If you use BIG-IP iControl, then you will also be pleased that Infoblox LBM regularly checks the certificates used to secure iControl communications and validates that they are not rejected or expired. For more information about this solution, see the solution page.

Previous F5 Fridays
F5 Friday: Speed Matters
F5 Friday: No DNS? No … Anything.
F5 Friday: Zero-Day Apache Exploit? Zero-Problem
F5 Friday: What's Inside an F5?
F5 Friday: Programmability and Infrastructure as Code
F5 Friday: Enhancing FlexPod with F5
F5 Friday: Microsoft and F5 Lync Up on Unified Communications

F5 Friday: Goodbye Defense in Depth. Hello Defense in Breadth.
#adcfw #infosec F5 is changing the game on security by unifying it at the application and service delivery layer.

Over the past few years we've seen firewalls fail repeatedly. We've seen business disrupted, security thwarted, and reputations damaged by the failure of the very devices meant to prevent such catastrophes from happening. These failures have been caused by a change in tactics from invaders who no longer seek to find a way through or over the walls, but who simply batter them down instead. A combination of traditional attacks – network-layer – and modern attacks – application-layer – has become a force to be reckoned with; one that traditional stateful firewalls are often not equipped to handle. Encrypted traffic flowing into and out of the data center often bypasses security solutions entirely, leaving another potential source of a breach unaddressed. And performance is being impeded by the increasing number of devices that must "crack the packet", as it were, and examine it, oftentimes duplicating functionality with varying degrees of success. This is problematic because the resolution to this issue can be as disconcerting as the problem itself: disable security. Seriously. Security functions have been disabled, intentionally, in the name of performance.

IT security personnel within large corporations are shutting off critical functionality in security applications to meet network performance demands for business applications. – SURVEY: SECURITY SACRIFICED FOR NETWORK PERFORMANCE

What the company [NSS Labs] found would likely startle any existing or potential customers: three of the six firewalls failed to stay operational when subjected to stability tests, five out of six didn't handle what is known as the "Sneak ACK attack," that would enable attackers to side-step the firewall itself. Finally, according to NSS Labs, the performance claims presented in the vendor datasheets "are generally grossly overstated." – Independent lab tests find firewalls fall down on the job

Add in the complexity from the sheer number of devices required to implement all the different layers of security needed, which increases costs while impairing performance, and you've got a broken model in need of repair. This is a failure of the defense-in-depth strategy: the layered, multi-device (silo) approach to operational security. Most importantly, it's one that's failing to withstand attacks. What we need is defense in breadth – the height of the stack – to assure availability and security using a more intelligent, unified security strategy.

DEFENSE in BREADTH

While it's really not as catchy as "defense in depth", the concept behind the admittedly awkward-sounding phrase is sound: to assure availability and security simultaneously requires a strong security strategy from the bottom to the top of the networking stack, i.e. the application layer. The ability of the F5 BIG-IP platform to provide security up and down the stack has existed for many years, and its capabilities to detect, prevent, and withstand concerted attacks have been appreciated by its customers (quietly) for some time. While basic firewalling functions have been a part of BIG-IP for years, there are certain capabilities required of a firewall – specifically an ICSA-certified firewall – that it didn't have. So we decided to do something about that. The result is the ICSA certification of the BIG-IP platform as a network firewall.
Combined with its existing ICSA certification for web application firewall (BIG-IP Application Security Manager) and SSL-TLS VPN 3.0 (BIG-IP Edge Gateway), the BIG-IP platform now supports a full-spectrum security solution in a single, unified system. What is unique about F5's approach is that the security capabilities noted above can be deployed on BIG-IP Application Delivery Controllers (ADCs) – best known for providing industry-leading intelligent traffic management and optimization capabilities. This firewall solution is part of F5's comprehensive security architecture that enables customers to apply a unified security strategy. For the first time in the industry, organizations can secure their networks, data, protocols, applications, and users on a single, flexible, and extensible platform: BIG-IP. Combining network-firewall services with the ability to plug the hole in modern security implementations (the application layer) in a platform-based solution provides the opportunity to consolidate security services and leverage a shared infrastructure platform, resulting in a more comprehensive, strategic deployment that is not only more secure, but more cost effective.

Resources:
The Fundamental Problem with Traditional Inbound Protection
The Ascendancy of the Application Layer Threat
ICSA Certified Network Firewall for Data Centers
Mature Security Organizations Align Security with Service Delivery
BIG-IP Data Center Firewall Solution – SlideShare Presentation
The New Data Center Firewall Paradigm – White Paper
Independent lab tests find firewalls fall down on the job
SURVEY: SECURITY SACRIFICED FOR NETWORK PERFORMANCE
F5 Friday: When Firewalls Fail…
Challenging the Firewall Data Center Dogma
What We Learned from Anonymous: DDoS is now 3DoS
The Many Faces of DDoS: Variations on a Theme or Two
F5 Friday: Eliminating the Blind Spot in Your Data Center Security Strategy
F5 Friday: Multi-Layer Security for Multi-Layer Attacks

F5 Friday: Performance, Throughput and DPS
No, not World of Warcraft "Damage per Second" – infrastructure "Decisions per Second".

Metrics are tricky. Period. Comparing metrics is even trickier. The purpose of performance metrics is, of course, to measure performance. But like most tests, before you can administer such a test you really need to know what it is you're testing. Saying "performance" isn't enough and never has been, as the term has a wide variety of meanings that are highly dependent on a number of factors. The problem with measuring infrastructure performance today – and this will continue to be a major obstacle in metrics-based comparisons of cloud computing infrastructure services – is that we're still relying on fairly simple measurements as a means to determine performance. We still focus on speeds and feeds, on wire and protocol processing. We look at throughput, packets per second (PPS) and connections per second (CPS) for network and transport layer protocols. While these are generally accurate for what they're measuring, we start running into real problems when we evaluate the performance of any component – infrastructure or application – in which processing, i.e. decision making, must occur.

Consider the difference in performance metrics between a simple HTTP request/response in which the request is nothing more than a GET request paired with a 0-byte payload response, and an HTTP POST request filled with data that requires processing not only on the application server, but on the database, and the serialization of a JSON response. The metrics that describe the performance of these two requests will almost certainly show that the former has a higher capacity and faster response time than the latter. Obviously those who wish to portray a high-performance solution are going to leverage the former test, knowing full well that those metrics are "best case" and will almost never be seen in a real environment because a real environment must perform processing, as per the latter test. Suggestions of a standardized testing environment, similar to application performance comparisons using the Pet Shop Application, are generally met with a frown because using a standardized application to induce real processing delays doesn't actually test the infrastructure component's processing capabilities; it merely adds latency on the back-end and stresses the capacity of the infrastructure component. Too, such a yardstick would fail to really test what's important – the speed and capacity of an infrastructure component to perform processing itself, to make decisions and apply them on the component – whether it be security or application routing or transformational in nature. It's an accepted fact that processing of any kind, at any point along the application delivery service chain, induces latency which impacts capacity. Performance numbers used in comparisons should reveal the capacity of a system including that processing impact. Complicating the matter is the fact that since there are no accepted standards for performance measurement, different vendors can use the same term to discuss metrics measured in totally different ways.

THROUGHPUT versus PERFORMANCE

Infrastructure components, especially those that operate at the higher layers of the networking stack, make decisions all the time. A firewall service may make a fairly simple decision: is this request for this port on this IP address allowed or denied at this time?
An identity and access management solution must make similar decisions, taking into account other factors, answering the question: is this user coming from this location on this device allowed to access this resource at this time? Application delivery controllers, a.k.a. load balancers, must also make decisions: which instance has the appropriate resources to respond to this user and this particular request within specified performance parameters at this time? We're not just passing packets anymore, and therefore performance tests that measure only the surface ability to pass packets or open and close connections are simply not enough. Infrastructure today is making decisions, and because those decisions often require interception, inspection, and processing of application data – not just individual packets – it becomes more important to compare solutions from the perspective of decisions per second rather than surface-layer protocol-per-second measurements. Decision-based performance metrics are a more accurate gauge as to how the solution will perform in a "real" environment, to be sure, as they portray the component's ability to do what it was intended to do: make decisions and perform processing on data. Layer 4 or HTTP throughput metrics seldom come close to representing the performance impact that normal processing will have on a system and, while important, should only be used with caution when considering performance.

Consider the metrics presented by Zeus Technologies in a recent performance test (Zeus Traffic Manager – VMware vSphere 4 Performance on Cisco UCS, 2010) and F5's performance results from 2010 (F5 2010 Performance Report). While showing impressive throughput in both cases, they also show the performance impact that occurs when additional processing – decisions – is added into the mix. The ability of any infrastructure component to pass packets or manage connections (TCP capacity) is all well and good, but these metrics are always negatively impacted once the component begins actually doing something, i.e. making decisions. Being able to handle almost 20 Gbps throughput is great, but if that measurement wasn't taken while decisions were being made at the same time, your mileage is not just likely to vary – it will vary wildly. Throughput is important, don't get me wrong. It's part of – or should be part of – the equation used to determine what solution will best fit the business and operational needs of the organization. But it's only part of the equation, and probably a minor part of that decision at that. Decision-based metrics should also be one of the primary means of evaluating the performance of an infrastructure component today. "High performance" cannot be measured effectively based on merely passing packets or making connections – high performance means being able to push packets, manage connections and make decisions, all at the same time. This is increasingly a fact of data center life as infrastructure components continue to become more "intelligent", as they become first-class citizens in the enterprise infrastructure architecture and are more integrated and relied upon to assist in providing the services required to support today's highly motile data center models. Evaluating a simple load balancing service based on its ability to move HTTP packets from one interface to the other with no inspection or processing is nice, but if you're ultimately planning on using it to support persistence-based routing, a.k.a.
sticky sessions, then the rate at which the service executes the decisions necessary to support that service should be as important – if not more so – to your decision-making process.

DECISIONS per SECOND

There are very few pieces of infrastructure on which decisions are not made on a daily basis. Even the use of VLANs requires inspection and decision-making to occur on the simplest of switches. Identity and access management solutions must evaluate a broad spectrum of data in order to make a simple "deny" or "allow" decision, and application delivery services make a variety of decisions across the security, acceleration and optimization demesne for every request they process. And because every solution is architected differently and comprised of different components internally, the speed and accuracy with which such decisions are made are variable and will certainly impact the ability of an architecture to meet or exceed business and operational service-level expectations. If you're not testing that aspect of the delivery chain before you make a decision, you're likely to either be pleasantly surprised or hopelessly disappointed in the decision-making performance of those solutions. It's time to start talking about decisions per second and the performance of infrastructure in the context in which it's actually used in data center architectures, rather than as stand-alone, packet-processing, connection-oriented devices. And as we do, we need to remember that every network is different, carrying different amounts of traffic from different applications. That means any published performance numbers are simply guidelines and will not accurately represent the performance experienced in an actual implementation. However, the published numbers can be valuable tools in comparing products… as long as they are based on the same or very similar testing methodology. Before using any numbers from any vendor, understand how those numbers were generated, what they really mean, and how much additional processing they include (if any). When looking at published performance measurements for a device that will be making decisions and processing traffic, make sure you are using metrics based on performing that processing.

1024 Words: Ch-ch-chain of Fools
On Cloud, Integration and Performance
As Client-Server Style Applications Resurface Performance Metrics Must Include the API
F5 Friday: Speeds, Feeds and Boats
Data Center Feng Shui: Architecting for Predictable Performance
Operational Risk Comprises More Than Just Security
Challenging the Firewall Data Center Dogma
Dispelling the New SSL Myth

F5 Friday: Zero-Day Apache Exploit? Zero-Problem
#infosec A recently discovered 0-day Apache exploit is no problem for BIG-IP. Here are a couple of different options using F5 solutions to secure your site against it.

It's called "Apache Killer" and it's yet another example of exploiting not a vulnerability, but a protocol's behavior. UPDATE (8/26/2011): We're hearing that other Range-* HTTP headers are also vulnerable. Take care to secure against these potential attack vectors as well! In this case, the target is Apache and the "vulnerability" is in the way multiple ranges are handled by the Apache HTTPD server. The RANGE HTTP header is used to request one or more sub-ranges of the response, instead of the entire response entity. Ranges are sometimes used by thin clients (an example given was an eReader) that are memory constrained and may want to display just portions of the web page. Generally speaking, multiple byte ranges are not used very often. RFC 2616 Section 14.35.2 (Range retrieval request) explains:

HTTP retrieval requests using conditional or unconditional GET methods MAY request one or more sub-ranges of the entity, instead of the entire entity, using the Range request header, which applies to the entity returned as the result of the request: Range = "Range" ":" ranges-specifier A server MAY ignore the Range header. However, HTTP/1.1 origin servers and intermediate caches ought to support byte ranges when possible, since Range supports efficient recovery from partially failed transfers, and supports efficient partial retrieval of large entities.

The attack is simple. It's a simple HTTP request with lots – and lots – of ranges. While this example uses the HEAD method, it can also be used with a GET.

HEAD / HTTP/1.1
Host:xxxx
Range:bytes=0-,5-1,5-2,5-3,…

According to researchers testing the vulnerability, a successful attack requires a "modest" number of requests.

BIG-IP SOLUTIONS

There are several options to prevent this attack using BIG-IP solutions.

HEADER SANITIZATION

First, you can modify the HTTP profile to simply remove the Range header. HTTP header removal – and replacement – is a common means of manipulating request and response headers as a means to "fix" broken applications or clients, or enable other functionality. This is a form of header sanitization, used typically to remove non-compliant header values that may or may not be malicious, but are undesirable. The Apache suggestion is to remove any Range header with 5 or more values. Note that this could itself break clients whose functionality expects a specific data set as specified by the RANGE header. As it is a rarely used header it is unlikely to impact clients adversely, but caution is always advised. Collaborate with developers and understand the implications before arbitrarily removing HTTP headers that may be necessary to application functionality.

HEADER VALUE SCRUBBING

You can also use an iRule to scrub the headers. By inspecting and thus detecting large numbers of ranges in the RANGE header, you can subsequently handle the request based on your specific needs. Possible reactions include removal of the header, rejection of the request, redirection to a honey pot, or replacement of the header. Sample iRule code (always test before deploying into production!):
when HTTP_REQUEST {
    # remove the Range header for CVE-2011-3192 if 5 or more ranges are requested
    if { [HTTP::header "Range"] matches_regex {bytes=(([0-9\- ])+,){5,}} } {
        HTTP::header remove Range
    }
}

Again, changing an HTTP header may have negative consequences on the functionality of the application and/or client, so tread carefully.
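As a variation on the same idea – a sketch not taken from the original post – the iRule below rejects an offending request outright rather than stripping the header, which is one of the alternative reactions mentioned above. It reuses the same regular expression; the choice of a 416 (Requested Range Not Satisfiable) response is illustrative only, and the connection-resetting “reject” command would be an equally valid reaction. As always, test before deploying into production.

when HTTP_REQUEST {
    # same detection as above: 5 or more byte ranges in the Range header
    if { [HTTP::header "Range"] matches_regex {bytes=(([0-9\- ])+,){5,}} } {
        # respond with 416 instead of removing the header; "reject" could be
        # used here instead to reset the connection outright
        HTTP::respond 416 content "Invalid Range request"
    }
}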
BIG-IP ASM ATTACK SIGNATURE

Another method of mitigation using BIG-IP solutions is to use a BIG-IP Application Security Manager (ASM) attack signature to detect and act upon an attack using this technique. The signature to add looks like:

pcre:"/Range:[\t ]*bytes=(([0-9\- ])+,){5,}/Hi";

It is important to be aware of this exploit and how it works, as it is likely that once it is widely mitigated, attackers will begin (if they have not already) to explore other ways in which this header can be exploited. There are multiple “range” style headers, any of which may be vulnerable to similar exploitation, so it may be time to review your current security strategy and determine whether the field of potentially exploitable headers is broad enough that a default-deny approach (block any header unless it is specifically allowed) may be required to secure against future DoS attacks targeting HTTP headers. There are also alternative solutions available already, including this writeup from SpiderLabs with a link to an OWASP mod_security rule file for mitigations. Stay safe out there!

Apache Warns Web Server Admins of DoS Attack Tool
The Many Faces of DDoS: Variations on a Theme or Two
How To Limit URI Length Without Recompiling Apache
F5 Friday: Multi-Layer Security for Multi-Layer Attacks
F5 Friday: Mitigating the ‘Padding Oracle’ Exploit for ASP.NET
F5 Friday: The Art of Efficient Defense
The Infrastructure 2.0–Security Connection
F5 Friday: Eliminating the Blind Spot in Your Data Center Security Strategy

F5 Friday: You’ll Catch More Bees with Honey(pots)

Catching bees with honey(pots) means they’re preoccupied with something other than stinging you.

Pop quiz time… pencils ready? Go. Is it good or bad to block malicious requests? If your answer was “that depends on a lot of different factors,” then pat yourself on the back. You done good. It may seem counterintuitive to answer “it’s bad to block malicious requests,” but depending on the attacker and his goals it may very well be just that.

MISSION IMPOSSIBLE

No security solution is 100% guaranteed to prevent a breach (unless we’re talking about scissors), and most are simply designed to accomplish two things: buy you time and collect enough information that you can address the underlying vulnerability the attacker is attempting to exploit. Some solutions buy you more time than others, and some provide the ability to collect more data than others, but in the end an attacker – like an application developer – with enough time, money and information will find a way to breach security. This is particularly true for new vulnerabilities and attack methodologies with which infosec professionals may not be familiar because, well, they’re newly discovered (or pre-discovered – someone has to be victim number one, after all) and there just isn’t a lot of information about them yet.

Now, the reason that blocking those malicious requests could actually be serving the miscreant is that, over time, a motivated attacker can learn a lot from the security solution, including how it works and what it’s specifically protecting. It can take weeks, but over time the attacker can build a profile of your security infrastructure based on which requests get blocked (mapping the parameters, values and paths that caused the request to be blocked) and subsequently find a way around it. This is true regardless of whether the blocking mechanism is implemented in the application itself or in network-deployed security infrastructure. Your new mission then, should you choose to accept it, is to confuse the attacker for as long as possible, essentially buying you time to figure out what they’re trying to do. Then you can patch or deploy or notify the proper authorities and try to put a stop to the attacker as well as the attacks. One of the ways in which you can buy a lot more time for researching and implementing a solution against old or new attack methodologies is to employ a strategy that combines a WAF (web application firewall) and a honeypot.

NOT POOH BEAR’S HONEYPOT

In almost every story about Pooh Bear he complains about a “rumbly in his tummy” and then laments the fact that his honeypot is nearly empty. The honeypot you want to leverage is one that Pooh Bear would love: it automatically reloads itself to an untouched state on a specified interval. Virtualization has afforded organizations the ability to easily implement honeypots that are exact duplicates of production applications and keep them “pristine” across time by reloading the original virtual image. That comes in handy when it comes to confusing attackers. Imagine their frustration when their last attack appeared to be successful, depositing a file on the web server, and when they try to access it, it isn’t there. Ha! Good times, good times.

But in order to accomplish this frustrating and protective strategy you must first have deployed a WAF capable of detecting an attack in progress (hint: it also must be deployed in reverse-proxy mode). And it has to be fairly accurate, because you really don’t want to route legitimate users to a honeypot.
That’d be frustrating, too, but you’ll get calls about that one. I can almost guarantee (with Heisenberg certainty) that an attacker won’t call you even if they do figure out they’re being routed to a honeypot.

F5 BIG-IP Application Security Manager (ASM) can do exactly this. Using a combination of techniques it can, with good accuracy, determine when the applications it protects are being attacked; it does so by inspecting the client, the requests, and the patterns of those requests. Once it determines that an application is under attack, it raises an event in the underlying, shared application delivery platform (TMOS) that can be acted upon using F5’s network-side scripting technology, iRules. Using iRules you can do, well, just about anything – including randomly routing requests to a honeypot.

The reason I say “random” is that any consistent reaction gives a motivated attacker more information upon which they can act to circumvent the security systems. As with timing-based attacks, one of the ways to successfully avoid compromise is to randomly vary the response pattern. A simple approach would be to decide that one of every X requests from the attacker will be randomly routed to the honeypot (a minimal sketch of this approach appears below). Additionally you’d want to apply a rate-limiting policy to the attacker to ensure their attacks don’t overwhelm legitimate traffic. This approach impedes the ability of the attacker to consistently gather information about the underlying architecture and security infrastructure that could be used against you. Such a strategy may in fact hold off the attacker indefinitely, although there are no guarantees. More likely it’s just buying you even more time in which you can gather forensic evidence for the authorities (because you are doing that, right?) and figure out whether there is, in fact, a vulnerability for which a solution exists and can be applied before it is exploited in your environment.

This approach also works to mitigate bots (web scrapers). Yes, you could – upon detection – simply close their sessions, but they’ll just open new ones. Yes, JavaScript-based protections can usually distinguish a bot from a human being, but such protections – like all security solutions – are not 100% foolproof. So instead of letting the web scraper know they’ve been caught, direct them to an application in the honeypot that contains a lot of irrelevant data. Assign them to a low rate class to limit their potential impact on the performance of the application, and let them download like there’s no tomorrow. Imagine their faces when they realize they’ve spent hours scraping what turns out to be useless data! Ha! Good times, good times.

AGILE SECURITY is a PART of an AGILE INFRASTRUCTURE

The ability to determine how best to respond to an attacker using network-side scripting is unique to BIG-IP ASM. Its integration with the underlying unified application delivery platform makes it possible for security professionals to take advantage of the core traffic management capabilities available on the BIG-IP platform, such as network-side scripting and rate shaping. Yes, you can leverage standard policies if you like, but the ability to customize if and when necessary makes your entire security infrastructure more agile; it affords the opportunity to respond to attacks and vulnerabilities on demand without requiring modification to applications or the rest of the infrastructure.
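To make the random-routing idea concrete, here is a minimal iRule sketch. The data group (“suspect_clients”), rate class (“attacker_throttle”) and pool names (“honeypot_pool”, “production_pool”) are hypothetical placeholders, and the mechanism that flags attackers and populates the data group – ASM or otherwise – is assumed to exist separately; treat this as an illustration of the pattern, not a production-ready policy.

when HTTP_REQUEST {
    # "suspect_clients" is a hypothetical data group of client addresses flagged
    # as attackers by ASM or another detection mechanism
    if { [class match [IP::client_addr] equals suspect_clients] } {
        # throttle suspects so honeypot-bound traffic can't crowd out real users
        rateclass attacker_throttle
        # divert roughly one in five suspect requests to the honeypot so the
        # attacker never sees a consistent response pattern
        if { int(rand() * 5) == 0 } {
            pool honeypot_pool
            return
        }
    }
    # everyone else (and the remaining suspect requests) goes to production
    pool production_pool
}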
Virtualization provides an affordable mechanism for deploying a mirror image (pun intended) of production apps, and thus for building out a honeypot. Combine that with the ability to dynamically and flexibly route requests based on context, atop the capability to detect the increasingly complex attack patterns to which applications are subjected, and it becomes possible to better protect data center resources without compromising availability or performance for legitimate users.