infosec
COVID-19: Lessons from Security Incident Response
For the past few decades, threats of an 'epidemic' or 'pandemic' nature have loomed over digital assets and infrastructures. Do you remember the DDoS attack in 2002 that targeted a dozen DNS root servers in the US and almost brought the Internet to its knees? What about the ILOVEYOU virus, which affected more than 10% of the world's computers and caused an estimated $10 billion worth of damages? Essentially, any zero-day attack targeting the core internet infrastructure and popular applications is potentially disastrous. The risk is even higher given the impressive volume and frequency of threats (an attack occurs every 39 seconds, on average 2,244 times a day, according to the University of Maryland). As a result, security professionals have enhanced their security incident response (SIR) mechanisms. With slight variations, SIRs follow the guidance of NIST SP 800-61 and generally consist of four phases: preparation; detection and analysis; containment, eradication, and recovery; and post-incident activity. As the world responds to COVID-19, what can we learn from SIR?

Early Detection

In SIR, as with COVID-19, precursors (clues that an incident may occur in the future) are difficult to identify. It is difficult to detect a potential COVID-19 patient until they start exhibiting symptoms. The good news is that COVID-19 is easily detectable: indicators such as symptoms and abnormal behaviors in human subjects are well known. Spotting an incident early is essential to mitigating its effects. In AppSec, traffic is continuously monitored and inspected 24/7 in real time, using rules-based and anomaly-based detection to identify traffic posing a threat. Artificial intelligence (AI) and machine learning (ML) augment detection by improving accuracy rates while reducing false positives. Similarly, significant effort is being deployed in the early detection of COVID-19 patients.
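As a rough illustration of the two detection modes, rules-based matching against known attack signatures and anomaly-based flagging of unusual behavior, here is a minimal Python sketch (the signatures and threshold are illustrative assumptions, not those of any real product):

```python
from statistics import mean, stdev

# Illustrative attack signatures a rules-based engine might match against
SIGNATURES = ["../", "<script>", "' or 1=1"]

def rules_match(request: str) -> bool:
    """Rules-based detection: flag requests containing known attack patterns."""
    req = request.lower()
    return any(sig in req for sig in SIGNATURES)

def is_anomalous(rate: float, history: list[float], threshold: float = 3.0) -> bool:
    """Anomaly-based detection: flag a request rate more than `threshold`
    standard deviations above the historical mean."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (rate - mu) / sigma > threshold

baseline = [100, 110, 95, 105, 102, 98]   # requests/second observed historically
print(rules_match("GET /index.php?q=<script>alert(1)</script>"))  # True
print(is_anomalous(500, baseline))                                # True
print(is_anomalous(104, baseline))                                # False
```

Real products combine many such signals, but the division of labor is the same: rules catch what is already known, anomaly scoring catches what merely looks wrong.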
A higher capacity to monitor the population for COVID-19 symptoms (the analogy of rules-based detection) can lead to early detection.

Early Containment

Once a threat is identified, it needs to be contained. Containment is a mitigation strategy enacted while a permanent fix is being developed. The main goal of containment is to reduce the speed of contamination by isolating affected subjects. My coworker, Raymond Pompon, has illustrated the similarities between SIR and COVID-19 containment strategies in Containment is Never Perfect. Despite the residual risk, as with early detection, early containment is essential to reducing the attack surface. Moreover, containment provides an environment for information gathering in point and contextual threat analysis. In that regard, SIR strategies include sandbox and honeypot systems to aid further threat analysis.

Tightening Security Posture

As a threat is identified and containment strategies are implemented, it is common practice in SIR to perform a risk assessment and to review and enhance the security posture of non-infected systems. Even when a permanent fix is not yet available, a looming threat imposes the need for a review of the security architecture and processes to identify and mitigate possible inflection points, threat actors, and attack vectors. With COVID-19, a similar process is being observed and should be encouraged, as organizations and households review their protocols, hygiene, and safety policies.

Communication Plan

In SIR, as with COVID-19, managing communication is a big challenge. To quote World Health Organization Director-General Tedros Adhanom Ghebreyesus, "Our greatest enemy right now is not the virus itself; it's fear, rumors, and stigma." Large organizations concerned about their reputation have developed specific security incident communication plans that reflect the nature, scope, risk, and impact of an attack.
Communications are typically delivered by security leadership in the organization to stakeholders, following the guidance of transparency. Special consideration is taken when a communication could be used for reverse engineering and be detrimental to the organization. An interesting model, however, is the way vulnerability disclosure operates in computer security. An independent researcher or ethical hacker not affiliated with an organization can discover a threat or vulnerability and report it directly to the affected organization or through a bounty program. Using such a communication channel, an organization can take mitigation action. In SIR, as with COVID-19, a collaborative communication approach can help in early detection, early containment, and tightening of the security posture.

Five Information Security New Year's Resolutions
Shot this late last year for Information Security Buzz. What are five information security new year's resolutions for improving cyber security in 2016 and why? ps

Related:
New Year's Resolutions for the Security Minded
Blueprint: 2016 is the Year SDN Finds its Home, and its Name is NFV
10 Cloud Security Predictions for 2016
2016 security predictions: Partnerships, encryption and behavior tracking

Technorati Tags: 2016,resolutions,security,infosec,silva,f5

F5 Friday: BIG DDoS Umbrella powered by the HP VAN SDN Controller
#SDN #DDoS #infosec Integration and collaboration are the secret sauce to breaking down barriers between security and networking. Most of the focus of SDN apps has been, to date, on taking layer 4-7 services and making them into extensions of the SDN controller. But HP is taking a different approach, and the results are tantalizing. HP's approach, as indicated by the recent announcement of its HP SDN App Store, focuses more on the use of SDN apps as a way to enable the sharing of data across IT silos to create a more robust architecture. These apps are capable of analyzing events and information that enable the HP VAN SDN Controller to prescriptively modify network behavior to address issues and concerns that impact networks and the applications that traverse them. One such concern is security (rarely mentioned in the context of SDN): for example, how the network might respond more rapidly to threat events, such as an in-progress DDoS attack. Which is where the F5 BIG DDoS Umbrella for HP's VAN (Virtual Application Network) comes into play. The focus of F5 BIG DDoS Umbrella is on mitigating in-progress attacks, and the implementation depends on collaboration between two disparate devices: the HP VAN SDN Controller and F5 BIG-IP. The two devices communicate via an F5 SDN app deployed on the HP VAN SDN Controller. The controller is all about the network, while the F5 SDN app is focused on processing and acting on information obtained from F5 security services deployed on the BIG-IP. This is collaboration and integration at work, breaking down barriers between groups (security and network operations) by sharing data and automating processes*.

F5 BIG DDoS Umbrella

The BIG DDoS Umbrella relies upon the ability of F5 BIG-IP to intelligently intercept, inspect, and identify DDoS attacks in flight. BIG-IP is able to identify DDoS events targeting the network, application layers, DNS, or SSL.
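To make the collaboration concrete, here is a hedged sketch in Python of the kind of threat notification a detection tier might push to a controller when it identifies an attack in flight. Every field name and value below is an illustrative assumption; the actual BIG DDoS Umbrella message format is not documented in this post:

```python
import json

def build_threat_notification(attack_type: str, source_ip: str, action: str) -> str:
    """Assemble the kind of threat notification a detection tier might push
    to an SDN controller. All field names are hypothetical placeholders."""
    return json.dumps({
        "event": "ddos-detected",
        "attackType": attack_type,    # e.g. network, application, dns, or ssl
        "sourceAddress": source_ip,   # the isolated point of entry
        "prescribedAction": action,   # e.g. block on the device nearest the attacker
    })

message = build_threat_notification("application", "203.0.113.7", "block")
print(message)
```

The point is the division of responsibility: the detection tier decides *what* happened and *what to do about it*, while the controller decides *where* in the network to enforce it.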
Configuration (available as an iApp upon request) is flexible, enabling the trigger to be one, a combination, or all of these events. This is where collaboration between security and network operations is critical to ensure the response to a DDoS event meets defined business and operational goals. When BIG-IP identifies a threat, it sends the relevant information with a prescribed action to the HP VAN SDN Controller. The BIG DDoS Umbrella agent (the SDN "app") on the HP VAN SDN Controller processes the information, and once the attacker's point of entry is isolated, the prescribed action is implemented on the device closest to the attacker. The BIG DDoS Umbrella App is free, and designed to extend the existing DDoS protection capabilities of BIG-IP to the edge of the network. It is a community framework which users may use, enhance, or improve.

Additional Resources:
DDoS Umbrella for HP SDN AppStore - Configuration Guide
HP SDN App Store - F5 BIG DDoS Umbrella App Community

* If that sounds more like DevOps than SDN, you're right. It's kind of both, isn't it? Interesting, that...

F5 Synthesis: Hybrid SSL Offload
#SSL #webperf #infosec Now your services can take advantage of hardware acceleration even when they're deployed on virtual machines. Way back in the day, when SSL offloading was young and relatively new, a variety of hardware, software, and even architectural approaches arose to offset the performance penalty imposed by the requisite cryptographic functionality. Most commonly, we'd slap a PCI card into a server, muck with the web server configuration (to load some shared objects) and voila! Instant performance boost via hardware acceleration. Later, an architectural approach that leveraged network-based offload was introduced. This meant configuring an SSL offload appliance in a side-arm (or one-arm) configuration (common for caches and even load balancers back then) in which SSL traffic was routed to the offload appliance and decrypted before being sent on to the web or app server. You added some latency in the hairpin (or trombone, if you prefer) but that was always more than offset by the improvement of not letting the web server try to decrypt that data in the first place. We've come a long way since then, and most often these days you'll find an application delivery controller (ADC) or an app proxy serving duty as cryptographic master of the application. Most ADCs are still far more efficient at handling SSL/TLS traffic because they've benefitted from Moore's Law in two places: the core system and the SSL acceleration hardware (which takes advantage of CPUs, too, in addition to custom hardware). Now comes the advent of the next generation of application delivery architectures which, necessarily, rely on a fabric-based approach and incorporate virtual appliances as well as traditional hardware. Services deployed on the hardware of course benefit from the availability of specialized SSL acceleration, but the virtual appliances? Not so much.
We (as in the corporate We) didn't like that much at all, especially given trends toward greater key lengths and the forthcoming HTTP 2.0 specification which, at least in practice, requires SSL/TLS. That means a lot more apps are going to need SSL, but they aren't going to want the associated performance penalty that comes with running it in software. They may not be as important, but they aren't expendable. That's true whether the web server natively handles SSL or you move it off to a virtual ADC within the services fabric. All apps are important, of course, but we know that some are more important than others and thus are afforded the benefits of services deployed on faster performing hardware while others are relegated to virtual machines. We take our commitment with Synthesis to leave no application behind seriously and thus have introduced the industry's first hybrid SSL offload capability.

Hybrid SSL Offload

Hybrid SSL Offload was made available with the release of BIG-IP 11.6 and enables virtual editions of BIG-IP, as well as less capable and legacy BIG-IP appliances and devices, to harness the power of hardware to improve app performance through cryptographic acceleration. This has the added benefit of freeing up resources on virtual appliances to improve the overall performance and capacity of app services deployed on that virtual edition. In a nutshell, user requests are sent to the appropriate virtual ADC instance, which hosts all app services for an app except SSL. SSL is offloaded to a designated service running on a hardware platform that can take advantage of its targeted hardware acceleration. Using hybrid SSL offload within the Synthesis service fabric allows organizations to:

• Achieve the maximum SSL performance of a virtual license
• Free up Virtual Edition CPU utilization for other application services

All together this means better app performance and capacity for services deployed on virtual editions.
All applications need services and deserve optimal performance, even those that might otherwise be designated as "red shirt" apps by IT. F5 Synthesis continues to leave no application behind by ensuring every application has access to the services it needs, even when it means collaborating across device types.

The man in your browser
#F5SOC #infosec He shouldn't be there, you know. The keys to the digital kingdom are credentials. In no industry is this more true (and ultimately more damaging) than financial services. The sophistication of the attacks used to gather those credentials and thwart the increasingly complex authentication process that guards financial transactions is sometimes staggering. That's because they require not just stealth but coordination across multiple touch points in the transaction process. MITB (man-in-the-browser) is not a new technique. It was first identified as a potential path to financial theft back in 2005, when Augusto Paes de Barros presented it as part of his "The future of backdoors - worst of all worlds". MITB didn't receive its "official" title until it was so named in 2007 by Philipp Gühring. In 2008, Trojans with MITB capabilities began to surface: Zeus. Torpig. Clampi. Citadel. Most financial-targeting Trojan malware is able to capture a wide variety of behavior and data as well as enabling remote control via VNC and RDP. Their capabilities range from keylogging to form grabbing, and more recently they have begun taking advantage of the mobile explosion in order to bypass multi-factor authentication requirements that leverage SMS to deliver OTP or TAN codes to legitimate users. These Trojans accomplish these feats using MITB to inject scripts into legitimate banking web applications to extract and deliver credentials to dropzones. These scripts are dangerous not only because of the amount of data they can collect; that's true of just about any Trojan that inserts itself into the user environment. These scripts are dangerous because they become part of the application logic. MITB essentially changes the client side of a web application, giving it potentially new and dangerous capabilities such as modifying the details of transactions in real time.
You might think you'd notice that, but because it's in the browser and modifying the business logic, it can hide those changes from you, at least for a few days. Financial institutions attempting to put a stop to these fraudulent activities often implement two-factor authentication. Logging in with simple credentials is not enough; a second password or code is delivered via a secondary channel such as SMS in order to authenticate the user. But even this is not always enough. As our F5 SOC research team recently showed, Neverquest is able to turn users' trust in their banking institutions against them. In addition to the typical MITB script injection designed to steal credentials, Neverquest attempts to coerce users into also installing an application on their mobile device, designed to capture and deliver secondary authentication codes and passwords as well. Successfully doing so means attackers can execute automated transactions against the user's financial accounts. Given that 73% of users are unable to distinguish between real and fake popup messages (Sharek, et al 2008, "Failure to Recognize Fake Internet Popup Warning Messages"), the potential for Neverquest and similar Trojans to succeed in such efforts is more likely than not, particularly when those popups are presented by a trusted site such as a financial institution. The key to detecting these script-injecting, app-modifying monsters is to understand the state of the web application page at the time it's delivered, before the Trojan has a chance to modify it, as well as monitoring for duplicate communication initiated from the web page. These are both methods used by web anti-fraud solutions to detect infected clients. A small protective script is included in each and every page that can detect attempts to modify the logic as well as notice duplicate communications, and can notify the user immediately.
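The underlying idea can be sketched simply: fingerprint the page as the server delivered it, then compare later copies against that baseline. Real anti-fraud solutions do this with a script running in the browser; the Python below is only a conceptual sketch, and the page content and injected script in it are made up:

```python
import hashlib

def page_fingerprint(html: str) -> str:
    """Hash the page as delivered by the server, before any client-side
    tampering, so later copies can be compared against this baseline."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

# Hypothetical banking page and a hypothetical MITB injection into it
ORIGINAL = "<html><body><form action='/transfer'>...</form></body></html>"
TAMPERED = ORIGINAL.replace(
    "</body>", "<script src='//evil.example/drop.js'></script></body>"
)

baseline = page_fingerprint(ORIGINAL)
print(page_fingerprint(ORIGINAL) == baseline)  # True: page unchanged
print(page_fingerprint(TAMPERED) == baseline)  # False: injection detected
```

Even a one-character change to the page logic produces a completely different digest, which is what makes the comparison reliable.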
You can learn more about Neverquest, how it works and how to mitigate it from our F5 SOC analysis of the malware.

F5 SOC Malware Summary Report: Neverquest
#F5SOC #malware #2FA #infosec The good news is that compromising #2FA requires twice the work. The bad news? Malware can do it. That malware is a serious problem, particularly for organizations that deal with money, is no surprise. Malware is one of the primary tools used by fraudsters to commit, well, fraud. In 2013, the number of cyberattacks involving malware designed to steal financial data rose by 27.6% to reach 28.4 million, according to noted security experts at Kaspersky. Organizations felt the result; 36% of financial institutions admit to experiencing ACH/wire fraud (2013 Faces of Fraud Survey). To protect against automated transactions originating from infected devices, organizations often employ two-factor authentication (2FA) that leverages OTPs (one-time passwords) or TANs (transaction authorization numbers) via a secondary channel such as SMS to (more) confidently verify identity. 2FA systems that use a secondary channel (a device separate from the system on which the transaction is initiated) are naturally more secure than those that transmit the second factor over a channel that can be accessed from the initiating system, a la e-mail. While 2FA schemes that use two disparate systems are, in fact, more secure, they are not foolproof, as malware like Neverquest has shown. Neverquest has been seen active in the wild, meaning that despite the need to compromise two client devices (usually a PC/laptop and a smartphone) it has been successful at manipulating victims into doing so. The primary infection occurs via the PC, which is a lot less difficult these days thanks to the prevalence of infected sites. But the secondary infection requires the victim to knowingly install an app on their phone, which means convincing them they should do so. This is where script injection comes in handy. Malware of this ilk modifies the web app by injecting a script that changes the behavior and/or look of the page.
Even the most savvy users are unlikely to be aware of such changes, as they occur "under the hood" at the real, official site. Nothing about the URI or host changes, which means all appears normal. The only way to detect such injections is to have prior knowledge of what the page should look like, down to the code level. The trust a victim has in the banking site is later exploited with a popup indicating they should provide their phone number and download an app. As it appears to be a valid request coming from their financial institution, victims may very well be tricked into doing so. And then Neverquest has what it needs: access to the SMS messages over which OTPs and/or TANs are transmitted. The attacker can then initiate automated transactions and confirm them by intercepting the SMS messages. Voila. Fraud complete. We (as in the corporate We) rely on our F5 SOC (Security Operations Center) team to analyze malware to understand how it compromises systems and enables miscreants to carry out their fraudulent goals. In July, the F5 SOC completed its analysis of Neverquest and has made its detailed results available. You can download the full technical analysis here on DevCentral. We've also made available a summary analysis that provides an overview of the malware, how it works, and how its risk can be mitigated. You can get that summary here on DevCentral as well. The F5 SOC will continue to diligently research, analyze and document malware in support of our security efforts and as a service to the broader community. We hope you find it useful and informative, and look forward to sharing more analysis in the future. You can get the summary analysis here, and the full technical analysis here.

Additional Resources:
F5 SOC on DevCentral
F5 WebSafe Service
F5 Web Fraud Protection Reference Architecture

Heartbleed and Perfect Forward Secrecy
Get the latest updates on how F5 mitigates Heartbleed. #heartbleed #PFS #infosec Last week was a crazy week for information security. That's probably also the understatement of the year. With the public exposure of Heartbleed, everyone was talking about what to do and how to do it to help customers and the Internet, in general, deal with the ramifications of such a pervasive vulnerability. If you still aren't sure, we have some options available; check them out here. The most significant impact on organizations was related to what amounts to the invalidation of the private keys used to ensure secure communications. Researchers found that exploitation of the vulnerability could expose not only passwords or sensitive data, but the keys to the organization's kingdom. That meant, of course, that anyone who'd managed to get them could decrypt any communication they'd snatched over the past couple of years while the vulnerable versions of OpenSSL were in use. Organizations must not only patch hundreds (or thousands) of servers, but they must also go through the process of obtaining new keys. That's not going to be simple, or cheap. That's all because of the way PKI (public key infrastructure) works: everything hinges on your private key. And like the One Ring, Gandalf's advice to Frodo applies to organizations: keep it secret; keep it safe. What Heartbleed did was to make that impossible. There's really no way to know for sure how many private keys were exposed, because the nature of the vulnerability was such that exploitation left no trail, no evidence, no nothing. No one knows just what was exposed, only what might have been exposed. And that is going to drive people to assume that keys were compromised, because playing with a potentially compromised key is ... as insane as Gollum after years of playing with a compromised Ring.
There's no debating this is the right course of action, and this post is not about that anyway, not really. Post-mortem blogs and discussions are generally about how to prevent similar consequences in the future, and this is definitely that kind of post. Now, it turns out that in the last year or so (and conspiracy theorists will love this) support for PFS (Perfect Forward Secrecy) has been introduced by a whole lot of folks. Both Microsoft and Twitter introduced support late last year, and many others have followed suit. PFS was driven by a desire for providers to protect consumer privacy from government snooping, but it turns out that PFS would have done the same in the case of Heartbleed being exploited. Even though a PFS deployment still has a single long-term private key, just as current encryption mechanisms do, the way PFS (and even plain FS) uses that key means that even if the key is compromised, it's not going to open up the world to the attacker. With PFS, sessions are protected by what are called ephemeral keys: short-lived keys unique to either the conversation or a few selected messages within a conversation, depending on the frequency with which ephemeral keys are generated. The long-term private key serves only to authenticate the exchange, not to derive those session keys. That means you can't use the private key to decrypt communication that's been secured using an ephemeral key, and cryptography is pretty unforgiving when it comes to even a single bit of difference in the data. In cryptography, forward secrecy (also known as perfect forward secrecy or PFS) is a property of key-agreement protocols ensuring that a session key derived from a set of long-term keys will not be compromised if one of the long-term keys is compromised in the future. The key used to protect transmission of data must not be used to derive any additional keys, and if the key used to protect transmission of data was derived from some other keying material, that material must not be used to derive any more keys.
Thus, compromise of a single key will permit access only to data protected by a single key. -- Wikipedia, Forward secrecy

This is the scenario for which PFS was meant to shine: the primary key is compromised, yet if PFS is enabled, no conversations (or transactions or anything else) can be decrypted with that key. Similarly, if the key currently being used to encrypt communications is compromised, it can only impact the current communication, nothing else. PFS support is relatively new; newer, in fact, than the Heartbleed bug itself. But now that we know it exists, along with the very real threat of vulnerabilities that compromise consumer privacy and organizational confidentiality, we should take a look at PFS and how it might benefit us to put it in place before we find out about the next bleeding organ.

Security Shortage? Look Internal.
There has been an increasing amount of commentary about the growing shortage of Information Security folks. While the reasons for this shortage are manifold and easily explained, that doesn't change the fact that it exists. Nor the fact that natural attrition may well be causing it to worsen. Here's why we're where we are:

• Information Security is a thankless job. Literally thankless. If you do enough to protect the organization, everyone hates you. If you don't do enough to protect the organization, everyone hates you.
• Information Security is hard. Attacks are constantly evolving, and often sprung out of the blue. While protecting against three threats that the InfoSec professionals have ferreted out, a fourth blindsides them.
• Information Security is complex. Different point, but similar to the one above. You can't just get by in InfoSec. You have to know some seriously deep things, and be constantly learning.
• Information Security is demanding. When the attackers come on a global clock, defenders have to be ready to respond on one. That means there are limits to "time off", counting both a good night's sleep and vacations as casualties.

The shrinking pool has made the last point worse. With fewer people to share the load, there is more load for each person to carry: more call, more midnight response, more everything. Making do with the best security staff you can find may well be killing the rest of your InfoSec team. If "the best you can find" isn't good enough, others must pick up the slack. And those last two points are the introduction to today's thought. Stop looking for the best InfoSec people you can find. Start training good internal employees in InfoSec. You all know this is the correct approach. No matter how good you are at Information Security, familiarity with the networks, systems, and applications of your specific organization is at least as important.
Those who manage the organization's IT assets know where the weaknesses are and can quickly identify new threats that pose a real risk to your data. The InfoSec needs of a bank, for example, are far better served by someone familiar with both banking and this bank than by someone who knows Information Security but learned all that they know at a dog pound. The InfoSec needs of the two entities are entirely different. And there's sense to this idea. You have a long history of finding good systems admins or network admins and training them in your organization's needs, but few organizations have a long history of hiring security folks and doing the same. With a solid training history and a deeper available talent pool, it just makes sense to find interested individuals within the organization and get them security training, backfilling their positions with the readily available talent out there. Will it take time to properly vet and train those interested? Of course it will. Will it take longer than it would take to school an InfoSec specialist in the intricacies of your environment? Probably not. SharePoint is SharePoint, and how to lock it down is well documented, but that app you had custom developed by a coding house that is now gone? That's got a way different set of parameters. Of course this option isn't for everyone, but combined with automating what is safe to automate (which is certainly not everything, or even the proverbial lion's share), you'll have a stronger security posture in the long run, and these are people who already know your network (and perhaps more importantly your work environment) but have an interest in InfoSec. Give them a shot; you might be pleased with the results. As to the bullet points above? You'll have to address those long-term too. They're why you're struggling to find InfoSec people in the first place. Though some of them are out of your control, you can offer training and trips to events like DefCon to minimize them.
Related Articles and Blogs:
The InfoSec Prayer
Authorization is the New Black for Infosec
The InfoSec Conundrum. Keep Playing Until You Lose.
WILS: InfoSec Needs to Focus on Access not Protection
F5 News - Threat mitigation
The Cost of Ignoring 'Non-Human' Visitors
F5 News - web application firewall

F5 Friday: When Firewalls Fail…
New survey shows firewalls falling to application and network DDoS with alarming frequency. With the increasing frequency of successful DDoS attacks there have come a few studies focusing on organizational security posture: readiness, awareness, and incident rate, as well as the costs of successful attacks. When Applied Research conducted a study this fall on the topic, it came back with some expected results but also uncovered some disturbing news: firewalls fail. Often. More often, in fact, than we might like to acknowledge. That's troubling because it necessarily corresponds to the success rate of attacks and, interestingly, the increasing success of multi-layer attacks. The results were not insignificant: 36% of respondents indicated failure of a firewall due to an application layer DDoS attack, while 42% indicated failure as a result of a network layer DDoS attack. That makes the 11 in 12 who said traditional safeguards are not enough a reasonable conclusion. There is a two-part challenge in addressing operational risk when it comes to mitigating modern attacks. First, traditional firewalls aren't able to withstand the flood of traffic being directed their way, and second, stateful firewalls, even with deep packet inspection capabilities, aren't adequately enabled to detect and respond to application layer DDoS attacks. Because of this, stateful firewalls are simply melting down in the face of overwhelming connections, and when they aren't, they're allowing the highly impactful application layer DDoS attacks to reach web and application services and shut them down. The result? An average cost to organizations of $682,000 in the past twelve months. Lost productivity (50%) and loss of data (43%) topped the sources of financial costs, but loss of revenue (31%) and loss of customer trust (30%) were close behind, with regulatory fines cited by 24% of respondents as a source of financial costs. A new strategy is necessary to combat the new techniques of attackers.
Today's threat spectrum spans the entire network stack, from layer one to layer seven. It is no longer enough to protect against one attack or even three; it's necessary to mitigate the entire multi-layer threat spectrum in a more holistic, intelligent way. Only 8% of respondents believe traditional stateful firewalls are enough to defend against the entire landscape of modern attacks. Nearly half see application delivery controllers as able to replace many or most traditional safeguards. Between one-third and one-half of respondents are already doing just that, with 100% of those surveyed discussing the possibility. While sounding perhaps drastic, it makes sense to those who understand the strategic point of control that the application delivery controller topologically occupies, and its ability to intercept, inspect, and interpret the context of every request, from the network to the application layers. Given that information, an ADC is eminently better positioned to detect and react to the application DDoS attacks that so easily bypass and ultimately overwhelm traditional firewall solutions. Certainly it's possible to redress application layer DDoS attacks with yet another point solution, but it has always been the case that every additional device through which traffic must pass between the client and the server introduces not only latency (which impedes optimal performance) but another point of failure. It is much more efficient in terms of performance, and provides a higher level of fault tolerance, to reduce the number of devices in the path between client and server. An advanced application delivery platform like BIG-IP, with an internally integrated, high-speed interconnect across network and application-focused solutions, provides a single point at which application and network layer protections can be applied, without introducing additional points of failure or latency.
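As a much simplified illustration, one layer-7 heuristic such a platform might apply is per-client request-rate limiting over a sliding window. The Python sketch below shows the idea; the window and threshold values are illustrative assumptions, not anything BIG-IP actually uses:

```python
from collections import deque

class RequestRateMonitor:
    """Flag a client whose request rate over a sliding time window exceeds a
    cap: one simple layer-7 heuristic an ADC-style proxy might apply."""

    def __init__(self, window_seconds: float = 10.0, max_requests: int = 100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.timestamps: dict[str, deque] = {}

    def allow(self, client_ip: str, now: float) -> bool:
        """Record a request at time `now`; return False if the client has
        exceeded the per-window cap."""
        q = self.timestamps.setdefault(client_ip, deque())
        while q and now - q[0] > self.window:   # drop requests outside the window
            q.popleft()
        q.append(now)
        return len(q) <= self.max_requests

monitor = RequestRateMonitor(window_seconds=1.0, max_requests=5)
results = [monitor.allow("198.51.100.9", now=0.1 * i) for i in range(8)]
print(results)  # first five allowed, last three rejected
```

A real ADC correlates many signals beyond request rate (session behavior, protocol compliance, application context), but the sliding-window pattern above is a common building block.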
The methods of attackers are evolving; shouldn’t your security strategy evolve along with them?

2011 ADC Security Survey Resources:
F5 Network 2011 ADC Security Study – Findings Report
2011 ADC Security Study
2011 ADC Security Study – Infographic
Study Finds Traditional Security Safeguards Failing, Application Delivery Controllers Viewed as an Effective Alternative

F5 Friday: Eliminating the Blind Spot in Your Data Center Security Strategy
Pop Quiz: In recent weeks, which of the following attack vectors have been successfully used to breach major corporation security? (choose all that apply)

Phishing
Parameter tampering
SQL Injection
DDoS
SlowLoris
Data leakage

If you selected them all, give yourself a cookie because you’re absolutely right. All six of these attacks have successfully been used recently, resulting in breaches across the globe:

International Monetary Fund
US Government – Senate
CIA
Citibank
Malaysian Government
Sony
Brazilian government and Petrobras, the latest LulzSec victims

That’s no surprise; attacks are ongoing, constantly. They are relentless. Many of them are mass attacks with no specific target in mind; others are more subtle, planned and designed to do serious damage to the victim. Regardless, these breaches all have one thing in common: the breach was preventable.

At issue is the reality that attackers today have moved up the stack and are attacking in the data center’s security blind spot: the application layer. Gone are the days of blasting the walls of the data center with packets to take out a site. Data center interconnects have expanded to the point that it’s nearly impossible to disrupt network infrastructure and cause an outage without a highly concerted and distributed effort. It does happen, but it’s more frequently the case that attackers are moving to highly targeted, layer 7 attacks that are far more successful, use far fewer resources, and have a much smaller chance of being discovered. The security-oriented infrastructure traditionally relied upon to alert on attacks is blind; it is unable to detect layer 7 attacks because they don’t appear to be attacks. They look just like “normal” users.

The most recent attack, against www.cia.gov, does not appear to be particularly sophisticated. LulzSec described that attack as a simple packet flood, which overwhelms a server with volume.
Analysts at F5, which focuses on application security and availability, speculated that it actually was a Slowloris attack, a low-bandwidth technique that ties up server connections by sending partial requests that are never completed. Such an attack can come in under the radar because of the low volume of traffic it generates and because it targets the application layer, Layer 7 in the OSI model, rather than the network layer, Layer 3. [emphasis added]

-- Ongoing storm of cyberattacks is preventable, experts say

It isn’t the case that organizations don’t have a sound security strategy and matching implementation; it’s that the strategy has a blind spot at the application layer. In fact, it’s been the evolution of network and transport layer security success that’s almost certainly driven attackers to climb higher up the stack in search of new and unanticipated (and often unprotected) avenues of opportunity.

ELIMINATING the BLIND SPOT

Too often organizations – and specifically developers – hear the words “layer 7” anything and immediately take umbrage at the implication that they are failing to address application security. In many situations it is the application that is vulnerable, but far more often it’s not the application – it’s the application platform or protocols that are the source of contention, neither of which a developer has any real control over. Attacks designed specifically to leech off resources – SlowLoris, DDoS, HTTP floods – simply cannot be noticed or prevented by the application itself. Neither are these attacks noticed or prevented by most security infrastructure components, because they do not appear to be attacks. In cases where protocol (HTTP) exploitation is leveraged, it is not possible to detect such an attack unless the right information is available in the right place at the right time. The right place is a strategic point of control. The right time is when the attack begins.
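The Slowloris technique quoted above can be made concrete with a small sketch. Everything here is invented for illustration – it is not F5 code, and the deadline value is an arbitrary assumption – but it shows why every individual byte looks legitimate and why only time-to-completion gives the attack away.

```python
# Illustrative sketch only. A Slowloris client sends a valid request line,
# then trickles headers and never sends the blank line (CRLF CRLF) that
# terminates them, holding the server connection open indefinitely.

def slowloris_fragments(n):
    """Yield the first n byte fragments such a client might send."""
    yield b"GET / HTTP/1.1\r\n"
    yield b"Host: victim.example\r\n"
    for _ in range(n - 2):
        yield b"X-a: b\r\n"   # endless filler headers; never the final b"\r\n"

# A guard in the spirit the article describes: since the bytes themselves
# are unremarkable, drop connections whose headers are still incomplete
# after a deadline, however "valid" the traffic looks.
def headers_complete(buffer: bytes) -> bool:
    return b"\r\n\r\n" in buffer

def should_drop(buffer: bytes, elapsed_seconds: float,
                deadline: float = 10.0) -> bool:
    """Drop a connection whose headers never finish within the deadline."""
    return not headers_complete(buffer) and elapsed_seconds > deadline

buf = b"".join(slowloris_fragments(5))
print(headers_complete(buf), should_drop(buf, elapsed_seconds=15.0))  # prints: False True
```

A volume-based defense never fires on this traffic – a handful of bytes every few seconds is far below any flood threshold – which is exactly the "under the radar" property the quote calls out.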
The right information is a combination of variables: the context carried with every request that imparts information about the client, network, and server-side status. If a component can see that a particular user is sending data at a rate much slower than their network connection should allow, that tells the component it’s probably an application layer attack, which then triggers organizational policies regarding how to deal with such an attack: reject the connection, shield the application, notify an administrator.

Only a component that is positioned properly in the data center, i.e. in a strategic point of control, can see all the variables and make such a determination. Only a component that is designed specifically to intercept, inspect, and act on data across the entire network and application stack can detect and prevent such attacks from being successfully carried out.

BIG-IP is uniquely positioned – topologically and technologically – to address exactly these kinds of multi-layer attacks. Whether the strategy to redress such attacks is “Inspect and Reject” or “Buffer and Wait”, the implementation using BIG-IP simply makes sense. Because of its position in the network – in front of applications, between clients and servers – BIG-IP has the necessary visibility into both internal and external variables. With its ability to intercept, inspect, and then act upon the variables extracted, BIG-IP is perfectly suited to detecting and preventing attacks that normally wind up in most infrastructures’ blind spot.

This trend is likely to continue, and it’s also likely that additional “blind spots” will appear as consumerization via tablets and mobile devices continues to drive new platforms and protocols into the data center. Preventing attacks from breaching security and claiming victory – whether the intent is to embarrass or to profit – is the goal of a comprehensive organizational security strategy. That requires a comprehensive, i.e.
multi-layer, security architecture and implementation: one without any blind spots in which an attacker can sneak up on you and penetrate your defenses. It’s time to evaluate your security strategy and systems with an eye toward whether such blind spots exist in your data center. And if they do, it’s well past time to do something about it.

More Info on Attack Prevention on DevCentral:
DevCentral Security Forums
DDoS Attack Protection in BIG-IP Local Traffic Manager
DDoS Attack Protection in BIG-IP Application Security Manager