Enhance your Application Security using Client-side signals – Part 2
Elevate your web and mobile application security with F5's innovative integration of web proxy strategies and client-side signals in this two-part series. Dive into the three critical categories of client-side signals (human interaction, device environment, and network signals) to enhance your application security strategy. Gain practical insights on distinguishing between humans and bots, fingerprinting devices, and analyzing network signals, ensuring robust protection for your online applications.

The Challenges of SQL Load Balancing
#infosec #iam Load balancing databases is fraught with operational and business challenges.

While cloud computing has brought to the forefront of our attention the ability to scale through duplication, i.e. horizontal scaling or "scale out" strategies, this strategy tends to run into challenges the deeper into the application architecture you go. Working well at the web and application tiers, a duplicative strategy tends to fall on its face when applied to the database tier. Concerns over consistency abound, with many simply choosing to throw out the concept of consistency and adopting instead an "eventually consistent" stance, in which it is assumed that data in a distributed database system will eventually become consistent and cause minimal disruption to application and business processes. Some argue that eventual consistency is not "good enough" and cite the additional concern that such strategies fail to adequately address failures. Thus a number of vendors, open source groups, and pundits spend time attempting to address both concerns. The result is database load balancing solutions.

For the most part such solutions are effective. They leverage master-slave deployments – typically used to address failure, and able to automatically replicate data between instances (with varying levels of success when distributed across the Internet) – and attempt to intelligently distribute SQL-bound queries across two or more database systems. The most successful of these architectures is the read-write separation strategy, in which all SQL transactions deemed "read-only" are routed to one database while all "write"-focused transactions are directed to another. Such foundational separation allows higher-layer architectures to be implemented, such as geographic-based read distribution, in which read-only transactions are further distributed across geographically dispersed database instances, all of which ultimately act as "slaves" to the single master database that processes all write-focused transactions. This results in an eventually consistent architecture, but one which manages to mitigate the disruptive aspects of eventual consistency by ensuring that the most important transactions – write operations – are, in fact, consistent. Even so, there are issues, particularly with respect to security.

MEDIATION inside the APPLICATION TIERS

Generally speaking, mediating solutions are a good thing – when they're external to the application infrastructure itself, i.e. the traditional three tiers of an application. The problem with mediation inside the application tiers, particularly at the data layer, is the same for infrastructure as it is for software solutions: credential management. Databases maintain their own set of users, roles, and permissions. Even as applications have been able to move toward a more shared set of identity stores, databases have not. This is in part due to the nature of data security and the need for granular permission structures, down to the cell in some cases, including transactional security that allows some users to update, delete, or insert while others are granted a different subset of permissions. But more difficult to overcome is the tight coupling of identity to connection for databases. With web protocols like HTTP, identity is carried along at the protocol level.
This means identity can be transient across connections, because it is often stuffed into an HTTP header via a cookie or stored server-side in a session – tied not to the connection but to identifying information. At the database layer, identity is tightly coupled to the connection. The connection itself carries along the credentials with which it was opened. This gives rise to problems for mediating solutions – not just load balancers, but also software such as ESB (enterprise service bus) and EII (enterprise information integration) styled solutions. Any device or software which attempts to aggregate database access for any purpose eventually runs into the same problem: credential management. This is particularly challenging for load balancing when applied to databases.

LOAD BALANCING SQL

To understand the challenges with load balancing SQL you need to remember that there are essentially two models of load balancing: transport and application layer. At the transport layer, i.e. TCP, connections are only temporarily managed by the load balancing device. The initial connection is "caught" by the load balancer and a decision is made, based on transport layer variables, about where it should be directed. Thereafter, for the most part, there is no interaction at the load balancer with the connection other than to forward it on to the previously selected node. At the application layer, the load balancing device terminates the connection and interacts with every exchange. This affords the load balancing device the opportunity to inspect the actual data or application layer protocol metadata in order to determine where the request should be sent.

Load balancing SQL at the transport layer is less problematic than at the application layer, yet it is at the application layer that the most value is derived from database load balancing implementations. That's because it is at the application layer where distribution based on "read" or "write" operations can be made. But accomplishing this requires that the SQL be inline; that is, the SQL being executed is actually included in the code and then executed via a connection to the database. If your application uses stored procedures, this method will not work for you. It is important to note that many packaged enterprise applications rely upon stored procedures and are thus not able to leverage load balancing as a scaling option. Your application design, and how your organization has agreed to protect your data, will determine which of these methods is used to access your databases. The use of inline SQL affords the developer greater freedom, at the cost of security, increased programming (to prevent the inherent security risks), difficulty in optimizing data and indices to adapt to changes in data volume, and deployment burdens. There is, however, lively debate on the merits of both access methods and how to overcome their inherent risks; the OWASP group has identified injection attacks as among the easiest to exploit and the most damaging in impact.

Application-layer distribution also requires that the load balancing service parse SQL dialects such as MySQL's SQL or T-SQL (the Microsoft Transact Structured Query Language). Databases, of course, are designed to parse these string-based commands and are optimized to do so. Load balancing services are generally not designed to parse these languages and, depending on the implementation of their underlying parsing capabilities, may actually incur significant performance penalties to do so.
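To make that parsing burden concrete, here is a toy sketch (in Python, with hypothetical pool names) of the read/write classification an application-layer SQL load balancer must perform on inline SQL. A real implementation must also handle comments, CTEs, stored procedure calls, and vendor dialects, which is exactly where the performance penalty comes from.

```python
# Toy illustration of the read/write classification an application-layer
# SQL load balancer performs on inline SQL. Pool names are hypothetical;
# real parsers must handle comments, CTEs, vendor dialects, and more.

READ_POOL = ["replica-1:3306", "replica-2:3306"]   # eventually consistent reads
WRITE_POOL = ["master:3306"]                       # the single consistent writer

READ_KEYWORDS = {"select", "show", "describe", "explain"}

def route(sql: str) -> str:
    """Pick a backend for one inline SQL statement."""
    tokens = sql.lstrip().split(None, 1)
    keyword = tokens[0].lower() if tokens else ""
    if keyword in READ_KEYWORDS and "for update" not in sql.lower():
        # Naive spread across replicas; real devices also weigh replica lag.
        return READ_POOL[hash(sql) % len(READ_POOL)]
    return WRITE_POOL[0]   # writes (and anything ambiguous) go to the master

print(route("SELECT name FROM users WHERE id = 7"))       # -> a read replica
print(route("UPDATE users SET name = 'x' WHERE id = 7"))  # -> master
```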
Regardless of those issues, an increasing number of organizations view SQL load balancing as a means to achieve a more scalable data tier. Which brings us back to the challenge of managing credentials.

MANAGING CREDENTIALS

Many solutions attempt to address the issue of credential management by simply duplicating credentials locally; that is, they create a local identity store that can be used to authenticate requests against the database. Ostensibly the credentials match those in the database (or in the identity store used by the database, such as can be configured for MSSQL) and are kept in sync. This obviously poses an operational challenge similar to that of any distributed system: synchronization and replication. Such processes are not easily (if at all) automated, and rarely is the same level of security and permissions available in the local identity store as in the database. What you generally end up with is a very loose "allow/deny" set of permissions on the load balancing device that actually opens the door for exploitation, as well as caching of credentials that can lead to unauthorized access to the data source.

This also leads to potential security risks from attempting to apply to SQL connections some of the same optimization techniques that application delivery solutions offer for TCP connections. For example, TCP multiplexing (sharing connections) is a common means of reusing web and application server connections to reduce latency (by eliminating the overhead associated with opening and closing TCP connections). Similar techniques at the database layer have been used by application servers for many years; connection pooling is not uncommon and is essentially duplicated at the application delivery tier through features like SQL multiplexing. Both connection pooling and SQL multiplexing incur security risks, because shared connections require shared credentials. So either every access to the database uses the same credentials (a significant negative when considering the loss of an audit trail), or we return to managing duplicate sets of credentials – one set at the application delivery tier and another at the database – which, as noted earlier, incurs additional management and security risks.

YOU CAN'T WIN FOR LOSING

Ultimately the decision to load balance SQL must be a combination of business and operational requirements. Many organizations successfully leverage load balancing of SQL as a means to achieve very high scale. Generally speaking, the resulting solutions – such as those often touted by eBay – are based on sound architectural principles such as sharding and are designed as a strategic solution, not a tactical response to operational failures; they rarely involve inspection of inline SQL commands. Rather, they are based on the ability to discern which database should be accessed given the function being invoked or the type of data being accessed, and then use a traditional database connection to connect to the appropriate database. This does not preclude the use of application delivery solutions as part of such an architecture, but rather indicates a need to collaborate across the various application delivery and infrastructure tiers to determine the strategy most likely to maintain high availability, scalability, and security across the entire architecture. Load balancing SQL can be an effective means of addressing database scalability, but it should be approached with an eye toward its potential impact on security and operational management.
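For contrast with inline-SQL inspection, here is a minimal sketch of the function-based routing those sharded architectures rely on; the shard names and connection strings are hypothetical. The key property is that the fully credentialed connection is opened directly to the chosen shard, so no mediating tier has to re-handle identity.

```python
# Sketch of function-based routing: the application picks the shard for
# the business function being invoked, then opens a normal, fully
# credentialed connection. Shard names and connection strings are
# hypothetical.

SHARD_MAP = {
    "orders":   "postgresql://orders-db.internal/orders",
    "catalog":  "postgresql://catalog-db.internal/catalog",
    "accounts": "postgresql://accounts-db.internal/accounts",
}

def dsn_for(function: str) -> str:
    """Return the connection string for the shard owning this function."""
    return SHARD_MAP[function]

# Credentials travel on the connection itself, opened directly to the
# chosen shard, so no mediating tier has to duplicate the identity store.
print(dsn_for("orders"))   # -> postgresql://orders-db.internal/orders
```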
Related blogs and articles:
- What are the pros and cons to keeping SQL in Stored Procs versus Code
- Mission Impossible: Stateful Cloud Failover
- Infrastructure Scalability Pattern: Sharding Streams
- The Real News is Not that Facebook Serves Up 1 Trillion Pages a Month…
- SQL injection – past, present and future
- True DDoS Stories: SSL Connection Flood
- Why Layer 7 Load Balancing Doesn't Suck
- Web App Performance: Think 1990s.

Snippet #7: OWASP Useful HTTP Headers
If you develop and deploy web applications, then security is on your mind. When I want to understand a web security topic, I go to OWASP.org, a community dedicated to enabling the world to create trustworthy web applications. One of my favorite OWASP wiki pages is the list of useful HTTP headers. This page lists a few HTTP headers which, when added to the HTTP responses of an app, enhance its security practically for free. Let's examine the list…

These headers can be added without concern that they affect application behavior:
- X-XSS-Protection – Forces the enabling of cross-site scripting protection in the browser (useful when the protection may have been disabled)
- X-Content-Type-Options – Prevents browsers from treating a response differently than the Content-Type header indicates

These headers may need some consideration before implementing:
- Public-Key-Pins – Helps avoid *-in-the-middle attacks using forged certificates
- Strict-Transport-Security – Enforces the use of HTTPS in your application, covered in some depth by Andrew Jenkins
- X-Frame-Options / Frame-Options – Used to avoid "clickjacking", but can break an application; usually you want this
- Content-Security-Policy / X-Content-Security-Policy / X-Webkit-CSP – Provides a policy for how the browser renders an app, aimed at avoiding XSS
- Content-Security-Policy-Report-Only – Similar to CSP above, but only reports, with no enforcement

Below is a script that incorporates three of the above headers, which are generally safe to add to any application: about 20 lines of code to add 100 more bytes to the total HTTP response, and enhanced application security! Go get your own FREE license and try it today!
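A sketch of what such a script can look like, written here as a generic Python WSGI middleware rather than platform-specific scripting; the header names and values come from the list above, while the middleware packaging is an assumption.

```python
# Inject three low-risk security headers into every HTTP response.
# Shown as a generic WSGI middleware; adapt to your platform of choice.

SECURITY_HEADERS = [
    ("X-XSS-Protection", "1; mode=block"),    # enable the browser's XSS filter
    ("X-Content-Type-Options", "nosniff"),    # honor the declared Content-Type
    ("X-Frame-Options", "SAMEORIGIN"),        # basic clickjacking defense
]

def add_security_headers(app):
    def middleware(environ, start_response):
        def patched_start_response(status, headers, exc_info=None):
            present = {name.lower() for name, _ in headers}
            # Only add a header if the application didn't set it already.
            extra = [(n, v) for n, v in SECURITY_HEADERS
                     if n.lower() not in present]
            return start_response(status, headers + extra, exc_info)
        return app(environ, patched_start_response)
    return middleware
```

Wrapping any WSGI application is then one line: app = add_security_headers(app).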
Complying with PCI DSS–Part 3: Maintain a Vulnerability Management Program

According to the PCI SSC, there are 12 PCI DSS requirements that satisfy a variety of security goals. Areas of focus include building and maintaining a secure network, protecting stored cardholder data, maintaining a vulnerability management program, implementing strong access control measures, regularly monitoring and testing networks, and maintaining information security policies. The essential framework of the PCI DSS encompasses assessment, remediation, and reporting. We're exploring how F5 can help organizations gain or maintain compliance; today's topic is Maintain a Vulnerability Management Program, which includes PCI Requirements 5 and 6. To catch up, read Complying with PCI DSS–Part 1: Build and Maintain a Secure Network and Complying with PCI DSS–Part 2: Protect Cardholder Data.

Requirement 5: Use and regularly update antivirus software or programs.

PCI DSS Quick Reference Guide description: Vulnerability management is the process of systematically and continuously finding weaknesses in an entity's payment card infrastructure system. This includes security procedures, system design, implementation, or internal controls that could be exploited to violate system security policy.

Solution: With BIG-IP APM and BIG-IP Edge Gateway, F5 provides the ability to scan any remote device or internal system to ensure that an updated antivirus package is running prior to permitting a connection to the network. Once connections are made, BIG-IP APM and BIG-IP Edge Gateway continually monitor the user connections for a vulnerable state change, and if one is detected, can quarantine the user on the fly into a safe, secure, and isolated network. Remediation services can include a URL redirect to an antivirus update server. For application servers in the data center, BIG-IP products can communicate with existing network security and monitoring tools. If an application server is found to be vulnerable or compromised, that device can be automatically quarantined or removed from the service pool. With BIG-IP ASM, file uploads can be extracted from requests and transferred over ICAP to a central antivirus (AV) scanner. If a file infection is detected, BIG-IP ASM will drop that request, making sure the file doesn't reach the web server.

Requirement 6: Develop and maintain secure systems and applications.

PCI DSS Quick Reference Guide description: Security vulnerabilities in systems and applications may allow criminals to access PAN and other cardholder data. Many of these vulnerabilities are eliminated by installing vendor-provided security patches, which perform a quick-repair job for a specific piece of programming code. All critical systems must have the most recently released software patches to prevent exploitation. Entities should apply patches to less-critical systems as soon as possible, based on a risk-based vulnerability management program. Secure coding practices for developing applications, change control procedures, and other secure software development practices should always be followed.

Solution: Requirements 6.1 through 6.5 deal with secure coding and application development; risk analysis, assessment, and mitigation; patching; and change control.
Requirement 6.6 states: "Ensure all public-facing web applications are protected against known attacks, either by performing code vulnerability reviews at least annually or by installing a web application firewall in front of public-facing web applications." This requirement can be easily met with BIG-IP ASM, a leading web application firewall (WAF) offering protection for vulnerable web applications. Using both a positive security model for dynamic application protection and a strong, signature-based negative security model, BIG-IP ASM provides application-layer protection against both targeted and generalized application attacks. It also protects against the Open Web Application Security Project (OWASP) Top Ten vulnerabilities and the threats on the Web Application Security Consortium's (WASC) Threat Classification lists.

To assess a web application's vulnerability, most organizations turn to a vulnerability scanner. The scanning schedule might depend on a change in control, as when an application is initially being deployed, or on other triggers such as a quarterly report. The vulnerability scanner scours the web application, and in some cases actually attempts potential attacks, to generate a report indicating all possible vulnerabilities. This gives the administrator managing the web security devices a clear view of all exposed areas and potential threats to the website. Such a report is a moment-in-time assessment and might not result in full application coverage, but it should give administrators a clear picture of their web application security posture. It includes information about coding errors, weak authentication mechanisms, fields or parameters that query the database directly, or other vulnerabilities that provide unauthorized access to information, sensitive or not. Otherwise, many of these vulnerabilities would need to be manually re-coded or manually added to the WAF policy—both expensive undertakings.

Simply having the vulnerability report, while beneficial, doesn't make a web application secure. The real value of the report lies in how it enables an organization to determine the risk level and how best to mitigate the risk. Since recoding an application is expensive and time-consuming and may generate even more errors, many organizations deploy a WAF like BIG-IP ASM. A WAF enables an organization to protect its web applications by virtually patching the open vulnerabilities until developers have an opportunity to properly close the hole. Often, organizations use the vulnerability scanner report to either tighten or initially generate a WAF policy. While finding vulnerabilities helps organizations understand their exposure, they must also have the ability to quickly mitigate those vulnerabilities to greatly reduce the risk of application exploits. The longer an application remains vulnerable, the more likely it is to be compromised.

For cloud deployments, BIG-IP ASM Virtual Edition (VE) delivers the same functionality as the physical edition and helps companies maintain compliance, including compliance with PCI DSS, when they deploy applications in the cloud. If an application vulnerability is discovered, BIG-IP ASM VE can quickly be deployed in a cloud environment, enabling organizations to immediately patch vulnerabilities virtually until the development team can permanently fix the application. Additionally, organizations are often unable to fix applications developed by third parties, and this lack of control prevents many of them from considering cloud deployments.
But with BIG-IP ASM VE, organizations have full control over securing their cloud infrastructure. BIG-IP ASM version 11.1 includes integration with IBM Rational AppScan, Cenzic Hailstorm, QualysGuard WAS, and WhiteHat Sentinel, making BIG-IP ASM the most advanced vulnerability assessment and application protection on the market. In addition, administrators can better create and enforce policies with information about attack patterns from a grouping of violations or otherwise correlated incidents. In this way, BIG-IP ASM protects the applications between scanning and patching cycles and against zero-day attacks that signature-based scanners won't find. Both are critical in creating a secure Application Delivery Network.

BIG-IP ASM also makes it easy to understand where organizations stand relative to PCI DSS compliance. With the BIG-IP ASM PCI Compliance Report, organizations can quickly see each security measure required to comply with PCI DSS 2.0 and understand which measures are or are not relevant to BIG-IP ASM functions. For relevant security measures, the report indicates whether the organization's BIG-IP ASM appliance complies with PCI DSS 2.0. For security measures that are not relevant to BIG-IP ASM, the report explains what action to take to achieve PCI DSS 2.0 compliance.

[Figure: BIG-IP ASM PCI Compliance Report]

Finally, with the unique F5 iHealth system, organizations can analyze the configuration of their BIG-IP products to identify any critical patches or security updates that may be necessary.

Next: Implement Strong Access Control Measures

ps
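To make the scanner-to-WAF "virtual patching" workflow described above concrete, here is a sketch of the translation step. The finding format and the policy object are hypothetical stand-ins for the product integrations named in this article, not any vendor's actual API.

```python
# Sketch of the virtual-patching step: turn scanner findings into
# blocking WAF rules until developers fix the code. The finding format
# and policy object are hypothetical stand-ins for vendor integrations.

from dataclasses import dataclass, field

@dataclass
class Finding:
    url: str          # e.g. "/search"
    parameter: str    # e.g. "q"
    vuln_class: str   # e.g. "sql_injection"

# Vetted mitigations for well-understood vulnerability classes.
MITIGATIONS = {
    "sql_injection": "block_sql_metacharacters",
    "xss":           "block_script_injection",
}

@dataclass
class WafPolicy:
    rules: list = field(default_factory=list)

    def add_rule(self, url: str, parameter: str, mitigation: str) -> None:
        self.rules.append({"url": url, "parameter": parameter,
                           "mitigation": mitigation, "mode": "blocking"})

def virtual_patch(policy: WafPolicy, findings: list) -> list:
    """Apply known mitigations; return findings needing human review."""
    unhandled = []
    for f in findings:
        mitigation = MITIGATIONS.get(f.vuln_class)
        if mitigation:
            policy.add_rule(f.url, f.parameter, mitigation)
        else:
            unhandled.append(f)
    return unhandled

policy = WafPolicy()
leftover = virtual_patch(policy, [Finding("/search", "q", "sql_injection")])
print(policy.rules, leftover)
```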
F5 Friday: Mitigating the THC SSL DoS Threat

The THC #SSL #DoS tool exploits the rapid resource consumption inherent in the handshake required to establish a secure session using SSL.

A new attack tool was announced this week, and it continues to follow in the footsteps of resource exhaustion as a means to achieve a DoS against target sites. Recent trends in attacks show an increasing interest in maximizing effect while minimizing effort. This means a move away from traditional denial of service attacks that focus on overwhelming sites with traffic and toward attacks that focus on rapidly consuming resources instead. Both have the same ultimate goal: overwhelming infrastructure, whether server or router or (insert infrastructure component of choice). The latest SSL-based attack falls into the modern category of denial of service attacks in that it's not an attempt to overwhelm with traffic, but rather to consume resources on servers such that capacity and the ability to respond to legitimate requests is eliminated. The blog post announcing the exploit tool explains:

"Establishing a secure SSL connection requires 15x more processing power on the server than on the client. THC-SSL-DOS exploits this asymmetric property by overloading the server and knocking it off the Internet. This problem affects all SSL implementations today. The vendors are aware of this problem since 2003 and the topic has been widely discussed. This attack further exploits the SSL secure Renegotiation feature to trigger thousands of renegotiations via single TCP connection." -- THC SSL DOS Tool Released

As the blog points out, there is no resolution to this exploit. Common mitigation techniques include the use of an SSL accelerator, i.e. a reverse-proxy capable device with specialized hardware designed to improve the processing capability of SSL and associated cryptographic functions. Modern application delivery controllers like BIG-IP include such hardware by default and make use of its performance and capacity-enhancing abilities to offset the operational costs of supporting SSL-secured communication.

BIG-IP MITIGATION

There are actually several ways in which BIG-IP can mitigate the potential impact of this kind of attack. First and foremost is simply its higher capacity for connections and processing of SSL/RSA operations. BIG-IP can manage myriad more connections – secure or not – than a typical web server, and thus, depending on the hardware platform on which BIG-IP is deployed, the mitigation may rest merely on having a BIG-IP in the path of the attack. If it does not, or if organizations desire a more proactive approach to mitigation, there are two additional options:

1. SSL renegotiation, which is in part the basis for the attack (it's what allows a relatively few clients to force the server to consume more and more resources), can be disabled in BIG-IP v11 and v10.2.3. This may break some applications and/or clients, so this option may be best left as a last resort, or the risks carefully weighed before deploying such a configuration.

2. An iRule that drops connections over which a client attempts to renegotiate more than five times in a given 60-second interval can be deployed. As noted by David Holmes and the iRule's author, Jason Rahm: "By silently dropping the client connection, the iRule causes the attack tool to stall for long periods of time, fully negating the attack.
There should be no false-positives dropped, either, as there are very few valid use cases for renegotiating more than once a minute." The full details and code for the iRule can be found in the DevCentral article "SSL Renegotiation DOS attack – an iRule Countermeasure".

UPDATE 11/1/2011: David Holmes has included an optimized version of the iRule in his latest blog, "The SSL Renegotiation Attack is Back." His version uses the normal flow key (instead of a random key), adds a log message, and optimizes memory consumption.

Regardless of the mitigating technique used, BIG-IP can provide the operational security necessary to prevent such consumption-leeching attacks from negatively impacting applications by defeating the attack before it reaches the application infrastructure. Stay safe!
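The authoritative countermeasure is the iRule in the articles above; purely as an illustration of the counting logic it describes (five renegotiations per 60 seconds, then silently drop), here is a sketch in Python:

```python
# Illustration of the countermeasure's counting logic only; the real
# mitigation is the iRule referenced above. Track renegotiation attempts
# per connection and silently drop any client exceeding five attempts
# within a 60-second sliding window.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_RENEGOTIATIONS = 5

_attempts: defaultdict = defaultdict(deque)   # connection id -> timestamps

def allow_renegotiation(conn_id: str) -> bool:
    """Return True to permit the handshake, False to drop the connection."""
    now = time.monotonic()
    window = _attempts[conn_id]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()   # expire attempts outside the sliding window
    return len(window) <= MAX_RENEGOTIATIONS

# A well-behaved client renegotiating once a minute is never dropped;
# an attack tool renegotiating in a tight loop stalls on its sixth try.
```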
BIG-IP Edge Client 2.0.2 for Android

Earlier this week F5 released our BIG-IP Edge Client for Android with support for the new Amazon Kindle Fire HD. You can grab it off Amazon instantly for your Android device. By supporting the BIG-IP Edge Client on Kindle Fire products, F5 is helping businesses secure personal devices connecting to the corporate network and helping end users be more productive, so it's perfect for BYOD deployments. The BIG-IP® Edge Client™ for all Android 4.x (Ice Cream Sandwich) or later devices secures and accelerates mobile device access to enterprise networks and applications using SSL VPN and optimization technologies. Access is provided as part of an enterprise deployment of F5 BIG-IP® Access Policy Manager™, Edge Gateway™, or FirePass™ SSL-VPN solutions.

BIG-IP® Edge Client™ for all Android 4.x (Ice Cream Sandwich) devices – features:
- Provides accelerated mobile access when used with F5 BIG-IP® Edge Gateway
- Automatically roams between networks to stay connected on the go
- Full Layer 3 network access to all your enterprise applications and files
- Supports multi-factor authentication with client certificate
- You can use a custom URL scheme to create Edge Client configurations, and to start and stop Edge Client

BEFORE YOU DOWNLOAD OR USE THIS APPLICATION YOU MUST AGREE TO THE EULA HERE: http://www.f5.com/apps/android-help-portal/eula.html
BEFORE YOU CONTACT F5 SUPPORT, PLEASE SEE: http://support.f5.com/kb/en-us/solutions/public/2000/600/sol2633.html

If you have an iOS device, you can get the F5 BIG-IP Edge Client for Apple iOS, which supports the iPhone, iPad and iPod Touch. We are also working on a Windows 8 client, which will be ready for the Win8 general availability.

ps

Resources:
- F5 BIG-IP Edge Client
- Samsung F5 BIG-IP Edge Client
- Rooted F5 BIG-IP Edge Client
- F5 BIG-IP Edge Portal for Apple iOS
- F5 BIG-IP Edge Client for Apple iOS
- F5 BIG-IP Edge apps for Android
- Securing iPhone and iPad Access to Corporate Web Applications – F5 Technical Brief
- Audio Tech Brief – Secure iPhone Access to Corporate Web Applications
- iDo Declare: iPhone with BIG-IP

In 5 Minutes or Less: BIG-IP ASM & Cenzic Scanner
In this special extended edition of In 5 Minutes or Less, I show you how BIG-IP ASM is integrated with the Cenzic Hailstorm Scanner for complete website protection, from vulnerability checking to detection to remediation. With a few clicks, you can instantly patch vulnerabilities.

ps

Resources:
- BIG-IP Application Security Manager
- F5 and Cenzic partnership
- In 5 Minutes or Less Series (22 videos – over 2 hours of In 5 Fun)
- F5 YouTube Channel

BYOD and the Death of the DMZ
#BYOD #infosec It's context that counts, not corporate connections.

BYOD remains a topic of interest as organizations grapple with the trend not only technologically but politically as well. There are dire warnings that refusing to support BYOD will result in an inability to attract and retain up-and-coming technologists, and that ignoring the problems associated with BYOD will eventually result in some sort of karmic IT event that will be painful for all involved. Surveys continue to tell us organizations cannot ignore BYOD. A recent ITIC survey indicated a high level of BYOD across the 550 global companies polled: 51% of workers utilize smartphones as their BYOD devices; another 44% use notebooks and ultrabooks, while 31% of respondents indicated they use tablets (most notably the Apple iPad) and 23% use home-based desktop PCs or Macs.

It's here, it's now, and it's in the data center. The question is no longer "will you allow it" but "how will you secure/manage/support it"? It's that first piece – secure it – that's causing some chaos and confusion. Just as we discovered with cloud computing early on, responsibility for anything shared is muddled. When asked who should bear responsibility for the security of devices in BYOD situations, respondents offered a nearly equal split between company (37%) and end user (39%), with 21% stating it fell equally on both.

From an IT security perspective, this is not a bad split. Employees should be active participants in organizational security. Knowing is, as GI Joe says, half the battle, and if employees bringing their own devices to work are informed and understand the risks, they can actively participate in improving security practices and processes. But relying on end users for organizational security would be folly, and thus IT must take responsibility for the technological enforcement of security policies developed in conjunction with the business. One of the first and most important things we must do to enable better security in a BYOD (and cloudy) world is to kill the DMZ.

[Pause for apoplectic fits]

By kill the DMZ I don't mean physically dismantle the underlying network architecture supporting it – I mean logically. The DMZ was developed as a barrier between the scary and dangerous Internet and sensitive corporate data and applications. That barrier now must extend to inside the data center, to the LAN, where the assumption has long been that devices and users accessing data center resources are inherently safe. They are not (and probably never have been). Every connection, every request, every attempt to access an application or data within the data center must be treated as suspect, regardless of where it may have originated and without automatically giving certain devices privileges over others. A laptop on the LAN may or may not be BYOD, it may or may not be secure, it may or may not be infected. A laptop on the LAN is no more innately safe than a tablet or a smartphone.

SMARTER CONTROL

This is where the concept of a strategic point of control comes in handy. If every end user is funneled through the same logical tier in the data center regardless of network origination, policies can be centrally deployed and enforced to ensure appropriate levels of access based on the security profile of the device and user. By sharing access control across all devices, regardless of who purchased and manages them, policies can be tailored to focus on the application and the data, not solely on the point of origination.
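As a sketch of what such a context-based policy looks like in code (the attribute names are hypothetical, and this is illustrative logic rather than any product's policy schema):

```python
# Sketch of a context-based access decision: device posture, user, and
# target sensitivity decide the outcome; the source network never does.
# Attribute names are hypothetical.

from dataclasses import dataclass

@dataclass
class RequestContext:
    user_authenticated: bool
    device_posture_ok: bool   # AV current, OS patched, not jailbroken
    strong_auth: bool         # e.g. client certificate or second factor
    app_sensitivity: str      # "public" | "internal" | "restricted"

def access_decision(ctx: RequestContext) -> str:
    # Note what is absent: no check of source network. A request from
    # the LAN is evaluated exactly like one arriving over the WAN.
    if not ctx.user_authenticated:
        return "deny"
    if ctx.app_sensitivity == "restricted" and not ctx.strong_auth:
        return "step-up-auth"
    if not ctx.device_posture_ok:
        return "quarantine"   # e.g. redirect to a remediation network
    return "allow"

print(access_decision(RequestContext(True, False, True, "internal")))  # quarantine
```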
While policies may trigger specific rules or inspections based on device or originating location, ultimately the question is who can access a given application and data, and under what circumstances? It's context that counts, not corporate connections. The questions must be asked regardless of whether the attempt to access begins within the corporate network boundaries or not. Traffic coming from the local LAN should not be treated any differently than traffic entering via the WAN. The notion of "trusted" and "untrusted" network connectivity has simply been obviated by the elimination of wires and the rampant proliferation of malware and other destructive digital infections.

In essence, the DMZ is being – and must be – transformed. It's no longer a zone of inherent distrust between the corporate network and the Internet; it's a zone of inherent distrust between corporate resources and everything else. Its design and deployment as a buffer is still relevant, but only in the sense that it stands between critical assets and access by hook, crook, or tablet. The DMZ as we have known it is dead. Trust no one.

Referenced blogs and articles:
- If Security in the Cloud Were Handled Like Car Accidents

SmartTV, Smartphones and Fill-in-the-Blank Employees
Right off the bat, I know the title sounds like it's all connected, but the stories are only slightly related, so I'll give you the option of dropping out now. Still here? Cool. I've been traveling over the last couple weeks, and stories caught my eye along the way that I probably would've written about but didn't. Until now. Besides, it's always fun to roll up a few stories in one to get back on track.

TVs are becoming cutting-edge multimedia devices that reside on your living room wall. You can stream movies, browse the web, check weather, plug in USBs for slideshows/video, play games, and home-network, along with simply catching the latest episode of your favorite program. This article from usatoday.com talks about many of the internet-enabled TVs and their capabilities. For instance, some TVs now include dual-core processors to make web browsing more enjoyable, since many TVs don't have the processing power to load web pages quickly – or at least not at the speed we're used to on our computers. Also coming out are TVs with screen resolutions four times greater than full HD screens – the 4K sets. These new 4K sets have apparently dampened any lingering 3D enthusiasm, which seems to be waning anyway. In addition to TVs, other appliances are getting smart, so they say. There are new refrigerators, air conditioners, washers, and dryers which are all app-controlled. Users can turn them on and off from anywhere. I know there are mobile 'apps', but it would be an easy transition to start calling our appliances apps also. Close enough. How's the clothes-cleaning app working? Is the food-cooling app running? I've mentioned many times that while all this is very cool stuff, we still need to remember that these devices are connected to the internet and subject to the same threats as all our other connected devices. It's only a matter of time before a hacker takes down all the 'smart' refrigerators on the East Coast. I also think that TVs, cars, and any other connected device could be considered BYOD in the near future. Why wouldn't a mobile employee want secure VDI access from his car's entertainment/GPS display? Why couldn't someone check their corporate email from the TV during commercials?

Smartphones, as most of you are aware, are changing our lives. Duh. There is an interesting series over on cnn.com called "Our Mobile Society," about how smartphones and tablets have changed the way we live. The first two articles, How smartphones make us superhuman and On second thought: Maybe smartphones make us 'SuperStupid'?, cover both sides of the societal dilemma. In 2011, there were 6 billion mobile phone subscriptions worldwide servicing the 7 billion people who live on this planet, according to the International Telecommunication Union. These connected devices have made trivia trivial, and we can keep in constant contact with everyone – even as people drive, text, and generally pay no attention to anything around them while interacting with their appendage. Pew also released a survey indicating that 54% of cell phone consumers who use mobile apps have decided not to install an app after learning how much personal information they'd have to share, and 30% of that group has uninstalled an app for privacy reasons. We are so concerned about our privacy that we're now dumping apps that ask for too much info. I know there is a 'We all have one & use it every day but don't look, ok' joke somewhere in there.

To Educate or Not Educate.
I have no idea why I only saw this recently, but back in July there was a lively discussion about whether security awareness training for employees is money well spent. I've often written about the importance of ongoing training. In Why you shouldn't train employees for security awareness, Dave Aitel argues that even with all that training, employees still click malicious links anyway. Instead of wasting money on employee training, organizations should bolster their systems' defenses to protect employees from themselves. Boris Sverdlik of Jaded Security posted a rebuttal, saying that employees are and should be accountable for what happens in the environment, and that no amount of controls can protect against people spilling secrets during a social engineering probe. In a rebuttal to both, Iftach Ian Amit of Security Art says they are both right and wrong at the same time. He states, 'Trying to solve infosec issues through technological means is a guaranteed recipe for failure. No one, no technology, or software can account for every threat scenario possible, and this is exactly why we layer our defenses. And layering shouldn't just be done from a network or software perspective – security layers also include access control, monitoring, tracking, analysis, and yes – human awareness. Without the human factor you are doomed.' His position is that when it comes to 'Information Security,' we focus too much on the 'information' part and less on the holistic meaning of 'security.' His suggestion is to look at your organization as an attacker would and invest in the areas that are vulnerable. That's your basic risk analysis and risk mitigation.

We are in a fun time for technology: enjoy, and use it wisely.

ps

References:
- Smart TVs offer web browsing, instant video streaming
- Poll: Cellphone users dump apps to save privacy, lose their phones anyway
- Forget 3D. Your dream TV should be 4K
- How smartphones make us superhuman
- On second thought: Maybe smartphones make us 'SuperStupid'?
- Technology Can Only Do So Much
- Unplug Everything!
- Why you shouldn't train employees for security awareness
- You Shouldn't train employees for Security Awareness – REBUTTAL
- Why Training Users in Enterprise Security May Not Be Effective
- Security Awareness and Security Context – Aitel and Krypt3ia are both wrong?

Persistent Threat Management
#dast #infosec #devops A new operational model for security operations can dramatically reduce risk.

Examples of devops focus a lot on provisioning and deployment configuration. Rarely mentioned is security, even though there is likely no better example of why devops is something you should be doing. That's because, aside from the challenges arising from the virtual machine explosion inside the data center, there's no issue that better exemplifies the inability of operations to scale manually to meet demand than web application security. Attacks today are persistent and scalable thanks to the rise of botnets, push-button mass attacks, and automation. Security operations, however, continues to be hampered by manual response processes that simply do not scale fast enough to deal with these persistent threats. Tools that promise to close the operational gap between discovery and mitigation for the most part continue to rely upon manual configuration and deployment. Because of the time investment required, organizations focus on securing only the most critical of web applications, leaving others vulnerable and open to exploitation. Two separate solutions – DAST and virtual patching – come together to offer a path to meeting this challenge head on, where it lives: in security operations. Through integration and codification of vetted mitigations, persistent threat management enables the operationalization of security operations.

A New Operational Model

DAST, according to Gartner, "locates vulnerabilities at the application layer to quickly and accurately give security teams insight into what vulnerabilities need to be fixed and where to find them." Well-known DAST providers like WhiteHat Security and Cenzic have long expounded upon scanning early and often, and upon the need to address the tendency of organizations to leave applications vulnerable despite the existence of well-known mitigating solutions – both from developers and infrastructure. Virtual patching is the process of employing a WAF-based mitigation to virtually "patch" a security vulnerability in a web application. Virtual patching takes far less time and effort than application modification, and is thus often used as a temporary mitigation that gives developers or vendors time to address the vulnerability while reducing the risk of exploitation sooner rather than later.

Virtual patching has generally been accomplished through the integration of DAST and WAF solutions. Push a button here, another one there, and voila! Application is patched. But this process is still highly manual, requiring human intervention to validate the mitigation as well as deploy it. Such a process does not scale well when an organization with hundreds of applications may be facing 7–12 vulnerabilities per application. Adoption of agile development methodologies has made this process more cumbersome, as releases are pushed to production more frequently, requiring scanning and patching again and again and again. The answer is to automate the discovery and mitigation process for the 80% of vulnerabilities for which there are known, vetted mitigating policies. This relieves the pressure on security ops and allows them to effectively scale to cover all web applications rather than just those deemed critical by the business.

AGILE meets OPS

This operational model exemplifies the notion of applying agile methodologies to operations, a.k.a. devops.
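To picture the loop this model implies (scan on every release, automatically deploy vetted mitigations, escalate the remainder), here is a sketch; the scanner and WAF interfaces are hypothetical placeholders for the DAST/WAF integrations discussed above.

```python
# Sketch of the continuous scan-and-mitigate loop. The scan and patch
# functions are hypothetical placeholders for DAST/WAF integrations.

KNOWN_MITIGATIONS = {"sql_injection", "xss", "csrf"}   # the vetted ~80%

def scan(app_url: str) -> list:
    # Placeholder for a DAST scan of one application; a real integration
    # would call the scanner's API and normalize its report format.
    return [{"class": "xss", "url": f"{app_url}/search", "parameter": "q"}]

def deploy_virtual_patch(finding: dict) -> None:
    # Placeholder for pushing a vetted, pre-approved WAF policy.
    print(f"virtual patch deployed: {finding}")

def escalate(finding: dict) -> None:
    # The ~20% with no vetted mitigation still goes to a human analyst.
    print(f"escalated for review: {finding}")

def security_ops_cycle(apps: list, iterations: int = 1) -> None:
    """One pass per release (or per day): scan, auto-mitigate, escalate."""
    for _ in range(iterations):
        for app_url in apps:
            for finding in scan(app_url):
                if finding["class"] in KNOWN_MITIGATIONS:
                    deploy_virtual_patch(finding)   # no human in the loop
                else:
                    escalate(finding)

security_ops_cycle(["https://app.example.com"])
```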
Continuous iterations of a well-defined process ensure better, more secure applications and free security ops to focus on the 20% of threats that cannot be addressed automatically. This enables operations to scale and provide the same (or better) level of service, something that's increasingly difficult as the number of applications and clients that must be supported explodes. A growing reliance on virtualization to support cloud computing, as well as the proliferation of devices, may make for more interesting headlines, but security processes will also benefit from operations adopting devops. An increasingly robust ecosystem of integrated solutions that enable a more agile approach to security, by taking advantage of automation and best practices, will be a boon to organizations struggling to keep up with the frenetic pace set by attackers.

Related blogs and articles:
- Drama in the Cloud: Coming to a Security Theatre Near You
- Cloud Security: It's All About (Extreme Elastic) Control
- Devops is a Verb
- Ecosystems are Always in Flux
- This is Why We Can't Have Nice Things
- Application Security is a Stack
- Devops is Not All About Automation