infosec
WARNING: Security Device Enclosed
If you aren’t using all the security tools at your disposal, you’re doing it wrong. How many times have you seen an employee wave a customer on by when the “security device enclosed” in some item – be it DVD, CD, or clothing – sets off the alarm at the doors? Just a few weeks ago I heard one young lady explain the alarm away with “it must have been the CD I bought at the last place I was at…” This apparently satisfied the young man at the doors, who nodded and turned back to whatever he’d been doing. All the data the security guy needed was there; he had all the context necessary to analyze the situation and make a determination based upon it. But he ignored it all. He failed to leverage the tools at his disposal and potentially allowed dollars to walk out the door. In doing so he also set a precedent and unintentionally sent a message to anyone who really wanted to commit a theft: I ignore warning signs, go ahead.
COVID-19; Lessons from Security Incident Response

For the past few decades, threats of an 'epidemic' or 'pandemic' nature have loomed over digital assets and infrastructures. Do you remember the DDoS attack in 2002 that targeted a dozen DNS root servers and almost brought the Internet to its knees? What about the ILOVEYOU virus, which affected more than 10% of the world's computers and caused an estimated $10 billion worth of damages? Essentially, any zero-day attack targeting the core internet infrastructure and popular applications is potentially disastrous. The risk is even higher given the impressive volume and frequency of threats (an attack occurs every 39 seconds, on average 2,244 times a day, according to the University of Maryland). As a result, security professionals have enhanced their security incident response (SIR) mechanisms. With slight variations, SIRs follow the guidance of NIST SP 800-61 and generally consist of four phases: preparation; detection and analysis; containment, eradication and recovery; and post-incident activity. As the world responds to COVID-19, what can we learn from SIR?

Early Detection

In SIR, as with COVID-19, precursors (clues that an incident may occur in the future) are difficult to identify. It is difficult to detect a potential COVID-19 patient until they start exhibiting symptoms. The good news is that COVID-19 is easily detectable: indicators such as symptoms and abnormal behaviors in human subjects are well known. Spotting an incident early is essential to mitigating its effects. In AppSec, traffic is monitored and inspected 24/7 in real time, using rules-based and anomaly-based detection to identify traffic posing a threat. Artificial intelligence (AI) and machine learning (ML) augment detection by improving accuracy rates while reducing false positives. Similarly, significant effort is being deployed in the early detection of COVID-19 patients. A higher capacity to monitor the population for COVID-19 symptoms (the analog of rules-based detection) can lead to early detection.

Early Containment

Once a threat is identified, it needs to be contained. Containment is a mitigation strategy enacted while a permanent fix is being developed. The main goal of containment is to reduce the speed of contamination by isolating affected subjects. My coworker, Raymond Pompon, has illustrated the similarities between containment strategies in SIR and the COVID-19 response in Containment is Never Perfect. Despite the residual risk, as with early detection, early containment is essential to reducing the attack surface. Moreover, containment provides an environment for information gathering in point and contextual threat analysis. In that regard, SIR strategies include sandbox and honeypot systems to aid further threat analysis.

Tightening Security Posture

Once a threat is identified and containment strategies are implemented, it is common practice in SIR to perform a risk assessment and to review and enhance the security posture of non-infected systems. Even when a permanent fix is not yet available, a looming threat imposes the need for a review of the security architecture and processes to identify and mitigate possible inflection points, threat actors, and attack vectors. With COVID-19, a similar process is being observed and should be encouraged, as organizations and households review their protocols, hygiene, and safety policies.

Communication Plan

In SIR, as with COVID-19, managing communication is a big challenge.
To quote World Health Organization Director-General Tedros Adhanom Ghebreyesus, "Our greatest enemy right now is not the virus itself; it's fear, rumors, and stigma." Large organizations concerned for their reputation have developed specific security incident communication plans that reflect the nature, scope, risk, and impact of an attack. Communications are typically delivered by security leadership in the organization to stakeholders, following the guidance of transparency. Special consideration is taken when a communication could be used for reverse engineering and be detrimental to the organization. An interesting model, however, is the way vulnerability disclosure operates in computer security. An independent researcher or ethical hacker not affiliated with an organization can discover a threat or vulnerability and report it directly to the affected organization or through a bounty program. Through such a communication channel, an organization can take mitigating action. In SIR, as with COVID-19, a collaborative communication approach can help in early detection, early containment, and tightening of the security posture.
Heartbleed and Perfect Forward Secrecy

Get the latest updates on how F5 mitigates Heartbleed. #heartbleed #PFS #infosec

Last week was a crazy week for information security. That's probably also the understatement of the year. With the public exposure of Heartbleed, everyone was talking about what to do and how to do it to help customers and the Internet, in general, deal with the ramifications of such a pervasive vulnerability. If you still aren't sure, we have some options available; check them out here.

The most significant impact on organizations was related to what amounts to the invalidation of the private keys used to ensure secure communications. Researchers found that exploitation of the vulnerability resulted in the sharing of not only passwords or sensitive data, but the keys to the organization's kingdom. That meant, of course, that anyone who'd managed to get them could decrypt any communication they'd snatched over the past couple of years while the vulnerable versions of OpenSSL were in use. Organizations must not only patch hundreds (or thousands) of servers, but they must also go through the process of obtaining new keys. That's not going to be simple - or cheap.

That's all because of the way PKI (public key infrastructure) works: everything hinges on your private key. And like the One Ring, Gandalf's advice to Frodo applies to organizations: keep it secret; keep it safe. What Heartbleed did was make that impossible. There's really no way to know for sure how many private keys were exposed, because the nature of the vulnerability was such that exploitation left no trail, no evidence, no nothing. No one knows just what was exposed, only what might have been exposed. And that is going to drive people to assume that keys were compromised, because playing with a potentially compromised key is ... as insane as Gollum after years of playing with a compromised Ring. There's no debating this is the right course of action, and this post is not about that anyway, not really. Post-mortem blogs and discussions are generally around how to prevent similar consequences in the future, and this is definitely that kind of post.

Now, it turns out that in the last year or so (and conspiracy theorists will love this) support for PFS (Perfect Forward Secrecy) has been introduced by a whole lot of folks. Both Microsoft and Twitter introduced support late last year, and many others have followed suit. PFS was driven by a desire for providers to protect consumer privacy from government snooping, but it turns out that PFS would have helped in the case of Heartbleed being exploited as well. Even though PFS still relies on a single long-term private key, that key is used only to authenticate the exchange; the session keys themselves are ephemeral - generated fresh for each conversation, or even for a few selected messages within a conversation, depending on the frequency with which ephemeral keys are generated. That means you can't use the private key to decrypt communication that's been secured with an ephemeral key. The keys are only related through the handshake, not derivable from one another, and cryptography is pretty unforgiving when it comes to even a single bit of difference in the data.
In cryptography, forward secrecy (also known as perfect forward secrecy or PFS) is a property of key-agreement protocols ensuring that a session key derived from a set of long-term keys will not be compromised if one of the long-term keys is compromised in the future. The key used to protect transmission of data must not be used to derive any additional keys, and if the key used to protect transmission of data was derived from some other keying material, that material must not be used to derive any more keys. Thus, compromise of a single key will permit access only to data protected by a single key. -- Wikipedia, Forward secrecy

This is the scenario in which PFS was meant to shine: the primary key is compromised, yet if PFS is enabled, no past conversations (or transactions or anything else) can be decrypted with that key. Similarly, if the key currently being used to encrypt communications is compromised, it can only impact the current communication - nothing else. PFS has only recently begun to see broad support - far more recently than Heartbleed has been in existence. But now that we know such support exists, and that the threat of vulnerabilities compromising consumer privacy and organizational confidentiality is very real, we should take a look at PFS and how it might benefit us to put it in place - before we find out about the next bleeding organ.
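A practical first step is simply knowing which of your clients are negotiating ephemeral key exchange today. Here's a minimal sketch of how that might look as an iRule attached to a virtual server with a client SSL profile - an illustration, not a hardening recommendation:

    when CLIENTSSL_HANDSHAKE {
        # SSL::cipher name returns the negotiated suite, e.g.
        # ECDHE-RSA-AES128-GCM-SHA256; DHE and ECDHE suites are
        # the ones that provide forward secrecy
        set cipher [SSL::cipher name]
        if { !($cipher contains "DHE") } {
            log local0. "Non-PFS cipher $cipher negotiated by [IP::client_addr]"
        }
    }

The actual fix is tightening the client SSL profile's cipher string to prefer ECDHE suites; the log simply tells you how much legacy traffic such a change would affect.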
F5 SOC Malware Summary Report: Neverquest

#F5SOC #malware #2FA #infosec The good news is that compromising #2FA requires twice the work. The bad news? Malware can do it.

That malware is a serious problem, particularly for organizations that deal with money, is no surprise. Malware is one of the primary tools used by fraudsters to commit, well, fraud. In 2013, the number of cyberattacks involving malware designed to steal financial data rose by 27.6% to reach 28.4 million, according to noted security experts at Kaspersky. Organizations felt the result: 36% of financial institutions admit to experiencing ACH/wire fraud (2013 Faces of Fraud Survey). To protect against automated transactions originating from infected devices, organizations often employ two-factor authentication (2FA) that leverages OTPs (one-time passwords) or TANs (transaction authorization numbers) delivered via a secondary channel such as SMS to (more) confidently verify identity. 2FA systems that use a secondary channel (a device separate from the system on which the transaction is initiated) are naturally more secure than those that transmit the second factor over a channel that can be accessed from the initiating system, a la e-mail.

While 2FA schemes that use two disparate systems are, in fact, more secure, they are not foolproof, as malware like Neverquest has shown. Neverquest has been seen active in the wild, meaning that despite the need to compromise two client devices - usually a PC/laptop and a smartphone - it has been successful at manipulating victims into doing so. The primary infection occurs via the PC, which is a lot less difficult these days thanks to the prevalence of infected sites. But the secondary infection requires the victim to knowingly install an app on their phone, which means convincing them they should do so. This is where script injection comes in handy. Malware of this ilk modifies the web app by injecting a script that changes the behavior and/or look of the page. Even the most savvy users are unlikely to be aware of such changes, as they occur "under the hood" at the real, official site. Nothing about the URI or host changes, which means all appears as normal. The only way to detect such injections is to have prior knowledge of what the page should look like - down to the code level.

The trust a victim has in the banking site is later exploited with a popup indicating they should provide their phone number and download an app. As it appears to be a valid request coming from their financial institution, victims may very well be tricked into doing so. And then Neverquest has what it needs - access to the SMS messages over which OTPs and/or TANs are transmitted. The attacker can then initiate automated transactions and confirm them by intercepting the SMS messages. Voila. Fraud complete.

We (as in the corporate We) rely on our F5 SOC (Security Operations Center) team to analyze malware to understand how it compromises systems and enables miscreants to carry out their fraudulent goals. In July, the F5 SOC completed its analysis of Neverquest and has made its detailed results available. You can download the full technical analysis here on DevCentral. We've also made available a summary analysis that provides an overview of the malware, how it works, and how its risk can be mitigated. You can get that summary here on DevCentral as well. The F5 SOC will continue to diligently research, analyze and document malware in support of our security efforts and as a service to the broader community.
We hope you find it useful and informative, and look forward to sharing more analysis in the future. You can get the summary analysis here, and the full technical analysis here.

Additional Resources:
F5 SOC on DevCentral
F5 WebSafe Service
F5 Web Fraud Protection Reference Architecture
Window Coverings and Security

Note: While talking about this post with Lori during a break, it occurred to me that you might be thinking I meant “MS Windows”. Not this time, but that gives me another blog idea… And I’ll sneak in the windows –> Windows simile somewhere, no doubt.

Did you ever ponder the history of simple things like windows? Really? They evolved from open spaces to highly complex triple-paned, UV-resistant, crank-operated monstrosities. And yet they serve basically the same purpose today that they did when they were just openings in a wall. Early windows were for ventilation and were only really practical in warm locales. Then shutters came along, which solved the warm/cold problem and kept rain off the bare wood or dirt floors, but weren’t very airtight. So to resolve that problem, a variety of materials from greased paper to animal hides were used to cover the holes while letting light in. This progression was not chronologically linear; it happened in fits and starts, with some parts of the world and social classes having glass windows long before the majority of people could afford them. When melted sand turned out to be relatively see-through, though, the end was inevitable. Glass was placed into windows so the weather stayed mostly out while the sun came in. The ability to open windows helped to “air out” a residence or business on nice warm days, and closing them avoided excessive heat loss on cold days. At some point, screens came along that kept bugs and leaves out when windows were open. Then artificial glass and double-paned windows came along, and now there are triple-paned windows that you can buy with blinds built into the frame, that you can open fully, flip down, and clean the outside of without getting a ladder and taking a huge chunk of your day. Where are windows headed next? I don’t know.

This development of seemingly unrelated things – screens and artificial glass and crankable windows – came about because people were trying to improve their environment. And that, when it comes down to it, is why we see advancement in any number of fields. In IT security, we have Web Application Firewalls to keep application-targeting attacks out, we have SSL to keep connections secure, and we have firewalls to keep generic attacks out, while deploying anti-virus to catch anything that makes it through. And that’s kind of like the development of windows, screens, awnings or curtains… all layers built up through experience to tackle the problem of letting the good (sunshine) in while keeping the bad (weather, dust, cold) out. Curtains even provide an adjustable filter for sunlight to come through. Open them to get more light in, close them to get less… because there is a case where too much of a good thing can be bad. Particularly if your seat at the dining room table is facing the window and the window is facing directly east or west.

We’re at a point in the evolution of corporate security where we need to deploy these various technologies together to do much the same with our public network that windows do with the outside: filter out the bad in its various forms and allow the good in. Even have the ability to crank down on the good so we can avoid getting too much of a good thing. Utilizing an access solution to allow your employees access to the systems they require from anywhere or any device enables the business to do their job, while protecting against any old hacker hopping into your systems – it’s like a screen that allows the fresh air in, but filters out the pests.
Utilizing a solution that can protect publicly facing applications from cross-site scripting and SQL injection attacks is also high on the list of requirements – or should be. Even if you have policies that force your developers to check for such attacks in all of their code, you still have purchased apps that might need exposing, or a developer might put in an emergency fix to a bug that wasn’t adequately security tested. It’s just a good idea to have this functionality in the network. That doesn’t even touch upon certification and audit reasons for running one, and they are perhaps the biggest drivers. Since I mentioned compliance: a tool that offers reporting is like when the sun shining in the window makes things too warm – you know when you need to shut the curtains, or tighten your security policy, as the case may be.

XML firewalls are handy when you’re using XML as a communications method and want to make certain that a hacker didn’t mock up anything from an SQL injection attack hidden in XML to an “XML bomb” type attack, and when combined with access solutions and web application firewalls, they’re another piece of our overall window(s) covering. If you’re a company whose web presence is of utmost importance, or one where a sizeable or important part of your business is conducted through your Internet connection, then DoS/DDoS protection is just plain and simply a good idea. Indeed, if your site isn’t available, it doesn’t matter why, so DDoS protection should be on the mandatory checklist.

SSL encryption is a fact of life in the modern world, and another one of those pieces of the overall window(s) covering that allows you to communicate with your valid users but shut out the invalid or unwanted ones. If you have employees accessing internal systems, or customers making purchases on your website, SSL encryption is pretty much mandatory. If you don’t have either of those use cases, there are still plenty of good reasons to keep a secure connection with your users, and it is worth considering if you have access to the technology and the infrastructure to handle it.

Of course, it is even cooler if you can do all of the above and more on a single high-performance platform designed for application delivery and security. Indeed, a single infrastructure point that could handle these various tasks would be very akin to a window with all of the bells and whistles. It would keep out the bad, let in the good, and through the use of policies (think of them as curtains) allow you to filter the good so that you are not letting too much in. That platform would be F5 BIG-IP LTM, ASM, and APM – maybe with some EDGE Gateways thrown in there if you have remote offices. All in one place, all on a single bit of purpose-built, high-performance Application Delivery Network hardware. It is indeed worth considering.

In this day and age, the external environment of the Internet is hostile; make certain you have all of the bits of security/window infrastructure necessary to keep your organization from being the next corporation to have to send out data breach notifications. Not all press is good press, and that’s one we’d all like to avoid. Review your policies, review your infrastructure, make sure you’re getting the most from your security architecture, and go home at the end of the day knowing you’re protecting corporate assets AND enabling business users. Because in the end, that’s all part of IT’s job.
Just remember to go back and look it over again next year if you are one of the many companies that doesn’t have dedicated security staff watching this stuff. It’s an ugly Internet out there; you and your organization be careful…
F5 Synthesis: Hybrid SSL Offload

#SSL #webperf #infosec Now your services can take advantage of hardware acceleration even when they're deployed on virtual machines

Way back in the day, when SSL offloading was young and relatively new, a variety of hardware, software and even architectural approaches arose to defeat the performance penalty imposed by the requisite cryptographic functionality. Most commonly, we'd slap a PCI card into a server, muck with the web server configuration (to load some shared objects) and voila! Instant performance boost via hardware acceleration. Later, an architectural approach that leveraged network-based offload was introduced. This meant configuring an SSL offload appliance in a side-arm (or one-arm) configuration (common for caches and even load balancers back then), in which SSL traffic was routed to the offload appliance and decrypted before being sent on to the web or app server. You added some latency in the hairpin (or trombone, if you prefer), but that was always more than offset by the improvement gained from not letting the web server try to decrypt that data in the first place.

We've come a long way since then, and most often these days you'll find an application delivery controller (ADC) or an app proxy serving duty as cryptographic master of the application. Most ADCs are still far more efficient at handling SSL/TLS traffic because they've benefitted from Moore's Law in two places: the core system and the SSL acceleration hardware (which takes advantage of CPUs, too, in addition to custom hardware). Now comes the advent of the next generation of application delivery architectures which, necessarily, rely on a fabric-based approach and incorporate virtual appliances as well as traditional hardware. Services deployed on the hardware of course benefit from the availability of specialized SSL acceleration. The virtual appliances? Not so much.

We (as in the corporate We) didn't like that much at all, especially given trends toward greater key lengths and the forthcoming HTTP 2.0 specification which, in practice, requires SSL/TLS. That means a lot more apps are going to need SSL - but they aren't going to want the associated performance penalty that comes with running it in software. They may not be as important, but they aren't expendable. That's true whether the web server natively handles SSL or you move it off to a virtual ADC within the services fabric. All apps are important, of course, but we know that some are more important than others and thus are afforded the benefits of services deployed on faster-performing hardware while others are relegated to virtual machines. We take our commitment with Synthesis to leave no application behind seriously, and thus have introduced the industry's first hybrid SSL offload capability.

Hybrid SSL Offload

Hybrid SSL offload was made available with the release of BIG-IP 11.6 and enables virtual editions of BIG-IP, as well as less capable and legacy BIG-IP appliances and devices, to harness the power of hardware to improve app performance through cryptographic acceleration. This has the added benefit of freeing up resources on virtual appliances to improve the overall performance and capacity of app services deployed on that virtual edition. In a nutshell, user requests are sent to the appropriate virtual ADC instance, which hosts all app services for an app except SSL. SSL is offloaded to a designated service running on a hardware platform that can take advantage of its targeted hardware acceleration.
Using hybrid SSL offload within the Synthesis service fabric allows organizations to:
• Achieve the maximum SSL performance of a virtual license
• Free up Virtual Edition CPU utilization for other application services

Altogether this means better app performance and capacity for services deployed on virtual editions. All applications need services and deserve optimal performance, even those that might otherwise be designated as "red shirt" apps by IT. F5 Synthesis continues to leave no application behind by ensuring every application has access to the services it needs, even when that means collaborating across device types.
Five Information Security New Year's Resolutions

Shot this late last year for Information Security Buzz. What are five information security new year's resolutions for improving cyber security in 2016, and why? ps

Related:
New Year's Resolutions for the Security Minded
Blueprint: 2016 is the Year SDN Finds its Home, and its Name is NFV
10 Cloud Security Predictions for 2016
2016 security predictions: Partnerships, encryption and behavior tracking
How To Limit URI Length Without Recompiling Apache

Use network-side scripting, of course! While just about every developer and information security professional knows that a buffer-overflow exploit can result in the execution of malicious code, not many truly grok the “why”. Fortunately, it’s not really necessary for either one to be able to walk through the execution stack and trace the byte-code as it overwrites registers and then jumps to execute it. They know it’s A Very Bad Thing™ and, perhaps more importantly, they know how to stop it.

SECONDARY and TERTIARY DEFENSE REQUIRED

The best place to prevent a buffer-overflow vulnerability is in the application code. Never trust input, whether from machine or man. Period. A simple check on the length of a string-based parameter can prevent vulnerabilities that may exist at the platform or operating system layer from being exploited. That’s true of almost all vulnerabilities, not just buffer overruns, by the way. An overly long input parameter could be an attempt at XSS or SQLi as well. Both tend to extend the “normal” data, and while often obfuscated, the sheer length of such strings can indicate the presence of something malicious and almost certainly something that should be investigated.

Assuming for whatever reason that this isn’t possible (and we know from research and surveys and live data from organizations like WhiteHat Security that it isn’t, for a number of very valid reasons), it is likely the case that information security and operational administrators will go looking for a secondary defense. As the majority of applications today are deployed as web applications, that generally means looking to the web or application server for limitations on URL and/or parameter lengths, as those are the most likely avenues of attack. One defense can easily be found if you’re deploying on Apache: the “LimitRequestLine” directive. The catch is that its value cannot be raised above the compiled-in ceiling, so changing that limit effectively means a compilation change. Yes, you’ll have to recompile Apache, but that’s what open source is all about, right? Customization. Rapid solutions to security vulnerabilities. Agile infrastructure. While you’re in there, you might want to consider evaluating the “LimitRequestFields” and “LimitRequestFieldSize” variables, too. These two variables control the number of HTTP headers allowed as well as the length of a header field and could potentially prevent an exploit of the platform and underlying operating system coming in through the HTTP headers. Yes, such exploits do exist, and as always, better safe than sorry.

While all certainly true and valid statements regarding open source software, the reality is that changing the core platform code results in a long-term commitment to re-applying those changes every time the core platform is upgraded or patched. Ask an enterprise PeopleSoft developer how that has worked for them over the past decade or so – but fair warning, you’ll want to be out of spitting range when you do. Compilation has a secondary negative – it’s not agile, even though open source is. If you run into a situation in which you need to change these values you’re going to have to recompile, retest, and redeploy. And you’re going to have to regression test every application deployed on that particular modified platform. Starting to see why the benefit of customization in open source software is rarely truly taken advantage of? There’s a better way, a more agile way, a flexible way.
NETWORK-SIDE SCRIPTING

Whenever a solution involves the inspection and potential rejection or modification of HTTP-borne data based on, well, attributes like length, encoding, schema, etc., it should ring a bell and give pause for thought. These are variables in every sense of the word. If you decide to restrict input based on these values you necessarily open yourself up to maintaining and, in particular, updating those limitations across the life of the application in question. Thus it seems logical that the actual implementation of such limitations would leverage a location and solution that has as little impact on the rest of the infrastructure as possible. You want to maximize the coverage of the solution while minimizing the disruption and impact on the infrastructure. There happens to be a strategic point of control that very much fits this description: centralization of functionality at a point of aggregation that maximizes coverage while minimizing disruption. As an added bonus the coverage is platform-agnostic, which means Apache, IIS, and others can be automagically covered without modifying the platforms themselves.

That strategic point in the architecture is a network-side scripting enabled application delivery controller (the load balancer, for you old skool operations folks). See, when you take advantage of a full proxy and really leverage its capabilities you can implement solutions that are by definition application-aware whilst maintaining platform-agnosticism. That’s important because the exploits you’re looking to stop are generally specific to an application; even when they target the platform they take advantage of application data manipulation and associated loopholes in the processing of that data. While the target may be the platform, the miscreant takes aim and transports the payload via, well, the payload. The data. That data is, of course, often carried in the query portion of the URI as a parameter. If it’s not in the URI then it’s in the payload, often as an x-www-form-urlencoded field submitted via the HTTP POST method.

The script can extract the URI and validate its total length, and/or it can extract each individual name-value pair (in the URI or in the body) and evaluate it for length, doing whatever it is you want done with invalid length values: reject the request, chop the value to a specific size and pass it on, log it, or even route the request to an application honey-pot.

Example iRule snippet to check the length of the URI on submission (HTTP::uri takes the place of the original $uri):

    if { [string length [HTTP::uri]] > 1024 } { HTTP::respond 414 }

If you’re thinking that it’s going to be time consuming to map all the possible variables to maximum lengths, well, you’re right. It will. You can of course write a generic across-the-board length-limiting script, or you could investigate a web application firewall instead. A WAF will “learn” the mapping in real-time and allow you to fine-tune or relax limitations on a parameter-by-parameter basis if you wish, with a lot less time investment. Both options, however, will do the job nicely, and both provide a secondary line of defense in the face of a potential exploit that is completely avoidable if you’ve got the right tools in your security toolbox.
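To make that concrete, here’s a minimal, self-contained sketch of both checks in a full iRule. The 1024-byte URI ceiling and the 256-byte per-parameter ceiling are arbitrary values chosen for illustration, not recommendations:

    when HTTP_REQUEST {
        # Reject any request whose total URI exceeds 1024 bytes
        if { [string length [HTTP::uri]] > 1024 } {
            HTTP::respond 414
            return
        }
        # Walk each name=value pair in the query string and reject
        # any single value longer than 256 bytes
        foreach pair [split [HTTP::query] "&"] {
            if { [string length [lindex [split $pair "="] 1]] > 256 } {
                HTTP::respond 400
                return
            }
        }
    }

POST bodies take a bit more work - you’d need HTTP::collect and HTTP::payload to get at the form data - and, as noted above, a WAF will learn these mappings for you rather than leaving you to hand-maintain them.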
The man in your browser

#F5SOC #infosec He shouldn't be there, you know.

The keys to the digital kingdom are credentials. In no industry is this more true (and ultimately more damaging) than in financial services. The sophistication of the attacks used to gather those credentials and thwart the increasingly complex authentication process that guards financial transactions is sometimes staggering. That's because they require not just stealth but coordination across multiple touch points in the transaction process. MITB (man-in-the-browser) is not a new technique. It was first identified as a potential path to financial theft back in 2005, when Augusto Paes de Barros presented it as part of his "The future of backdoors - worst of all worlds". MITB didn't receive its "official" title until it was so named in 2007 by Philipp Gühring. In 2008, Trojans with MITB capabilities began to surface: Zeus. Torpig. Clampi. Citadel.

Most financial-targeting Trojan malware is able to capture a wide variety of behavior and data as well as enable remote control via VNC and RDP. Their capabilities include keylogging and form grabbing, and more recently they have begun taking advantage of the mobile explosion in order to bypass multi-factor authentication requirements that leverage SMS to deliver OTPs or TANs to legitimate users. These Trojans accomplish such feats using MITB to inject scripts into legitimate banking web applications to extract and deliver credentials to dropzones. These scripts are dangerous not only because of the amount of data they can collect; that's true of just about any Trojan that inserts itself into the user environment. These scripts are dangerous because they become part of the application logic. MITB essentially changes the client side of a web application, giving it potentially new and dangerous capabilities such as modifying the details of transactions in real time. You might think you'd notice that, but because it's in the browser, modifying the business logic, it can hide those changes from you - at least for a few days.

Financial institutions attempting to put a stop to these fraudulent activities often implement two-factor authentication. Logging in with simple credentials is not enough; a second password or code is delivered via a secondary channel such as SMS in order to authenticate the user. But even this is not always enough. As our F5 SOC research team recently showed, Neverquest is able to turn users' trust in their banking institutions against them. In addition to the typical MITB script injection designed to steal credentials, Neverquest attempts to coerce users into also installing an application on their mobile device, designed to capture and deliver secondary authentication codes and passwords as well. Successfully doing so means attackers can execute automated transactions against the user's financial accounts. Given that 73% of users are unable to distinguish between real and fake popup messages (Sharek, et al 2008, "Failure to Recognize Fake Internet Popup Warning Messages"), Neverquest and similar Trojans are more likely than not to succeed in such efforts, particularly when those popups are presented by a trusted site such as a financial institution.

The key to detecting these script-injecting, app-modifying monsters is to understand the state of the web application page at the time it's delivered - before the Trojan has a chance to modify it - as well as monitoring for duplicate communication initiated from the web page. These are both methods used by web anti-fraud solutions to detect infected clients.
A small protective script is included in each and every page that can detect attempts to modify the logic, as well as notice duplicate communications, and can notify the user immediately. You can learn more about Neverquest, how it works, and how to mitigate it from our F5 SOC analysis of the malware.
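Delivering that kind of protective script is a natural fit for network-side scripting, since the proxy touches every page on its way out. As a sketch only - the script name is hypothetical, and in production this is the job of an anti-fraud product like WebSafe rather than a hand-rolled iRule - injection into outbound HTML might look like this (assuming a stream profile is assigned to the virtual server):

    when HTTP_REQUEST {
        # Ask for uncompressed content so the stream filter can
        # see (and rewrite) the HTML, and keep it off requests
        HTTP::header remove "Accept-Encoding"
        STREAM::disable
    }
    when HTTP_RESPONSE {
        # Only inject into HTML responses
        if { [HTTP::header value "Content-Type"] contains "text/html" } {
            # Insert the (hypothetical) protective script just
            # before the closing head tag
            STREAM::expression {@</head>@<script src="/protect.js"></script></head>@}
            STREAM::enable
        }
    }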
F5 Friday: BIG DDoS Umbrella powered by the HP VAN SDN Controller

#SDN #DDoS #infosec Integration and collaboration are the secret sauce to breaking down barriers between security and networking

Most of the focus of SDN apps has been, to date, on taking layer 4-7 services and making them into extensions of the SDN controller. But HP is taking a different approach, and the results are tantalizing. HP's approach, as indicated by the recent announcement of its HP SDN App Store, focuses more on the use of SDN apps as a way to enable the sharing of data across IT silos to create a more robust architecture. These apps are capable of analyzing events and information, enabling the HP VAN SDN Controller to prescriptively modify network behavior to address issues and concerns that impact networks and the applications that traverse them. One such concern is security (rarely mentioned in the context of SDN) - for example, how the network might respond more rapidly to threat events, such as an in-progress DDoS attack. Which is where the F5 BIG DDoS Umbrella for HP's VAN (Virtual Application Network) comes into play.

The focus of F5 BIG DDoS Umbrella is on mitigating in-progress attacks, and the implementation depends on a collaboration between two disparate devices: the HP VAN SDN Controller and F5 BIG-IP. The two devices communicate via an F5 SDN app deployed on the HP VAN SDN Controller. The controller is all about the network, while the F5 SDN app is focused on processing and acting on information obtained from F5 security services deployed on the BIG-IP. This is collaboration and integration at work, breaking down barriers between groups (security and network operations) by sharing data and automating processes*.

F5 BIG DDoS Umbrella

The BIG DDoS Umbrella relies upon the ability of F5 BIG-IP to intelligently intercept, inspect and identify DDoS attacks in flight. BIG-IP is able to identify DDoS events targeting the network, application layers, DNS or SSL. Configuration (available as an iApp upon request) is flexible, enabling the trigger to be one, a combination, or all of these events. This is where collaboration between security and network operations is critical to ensure the response to a DDoS event meets defined business and operational goals. When BIG-IP identifies a threat, it sends the relevant information, with a prescribed action, to the HP VAN SDN Controller (a toy sketch of this kind of event export follows at the end of this post). The BIG DDoS Umbrella agent (the SDN "app") on the HP VAN SDN Controller processes the information, and once the location of entry for the attacker is isolated, the prescribed action is implemented on the device closest to the attacker. The BIG DDoS Umbrella App is free and designed to extend the existing DDoS protection capabilities of BIG-IP to the edge of the network. It is a community framework which users may use, enhance or improve.

Additional Resources:
DDoS Umbrella for HP SDN AppStore - Configuration Guide
HP SDN App Store - F5 BIG DDoS Umbrella App Community

* If that sounds more like DevOps than SDN, you're right. It's kind of both, isn't it? Interesting, that...
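As promised above, here is a toy illustration of the reporting half of that exchange - not the actual BIG DDoS Umbrella implementation (which ships as an iApp), and with a deliberately crude trigger; the collector pool name is hypothetical:

    when HTTP_REQUEST {
        # Crude trigger for illustration: flag a client that exceeds
        # a simple request count (the real solution uses BIG-IP's
        # DoS detection across network, DNS, SSL and app layers)
        if { [table incr "rate:[IP::client_addr]"] > 1000 } {
            # Export the event over high-speed logging to a collector
            # pool fronting the controller-side listener
            set hsl [HSL::open -proto UDP -pool sdn_collector_pool]
            HSL::send $hsl "ddos attacker=[IP::client_addr] vip=[IP::local_addr] action=block\n"
        }
    }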