policy enforcement
IPS or WAF Dilemma
As they endeavor to secure their systems from malicious intrusion attempts, many companies face the same decision: whether to use a web application firewall (WAF) or an intrusion detection or prevention system (IDS/IPS). But the notion that only one or the other is the solution is faulty. Attacks occur at different layers of the OSI model, and they often penetrate multiple layers of either the stack or the actual system infrastructure. Attacks are also evolving—what was once only a network-layer attack has shifted into a multi-layer network and application attack. For example, malicious intruders may start with a network-based attack, like denial of service (DoS), and once that takes hold, quickly launch another wave of attacks targeted at layer 7 (the application). Ultimately, this should not be an either/or discussion. Sound security means providing the best security not only at one layer, but at all layers. Otherwise, organizations have a closed gate with no fence around it.

Often, IDS and IPS devices are deployed as perimeter defense mechanisms, with an IPS placed in line to monitor network traffic as packets pass through. The IPS tries to match data in the packets against a signature database, and it may look for anomalies in the traffic. An IPS can also take action based on what it detects, for instance by blocking or stopping the traffic. IPSs are designed to block the types of traffic that they identify as threatening, but they do not understand web application protocol logic and cannot determine whether a web application request is normal or malicious. So if the IPS does not have a signature for a new attack type, it could let that attack through without detection or prevention. With millions of websites and innumerable exploitable vulnerabilities available to attackers, IPSs fall short when web application protection is required. They may generate false positives, which can delay the response to actual attacks. And actual attacks might also be accepted as normal traffic if they happen frequently enough, since an analyst may not be able to review every anomaly.

WAFs have matured greatly since the early days. They can create a highly customized security policy for a specific web application. WAFs not only reference signature databases, but also use rules that describe what good traffic should look like, combined with generic attack signatures, to provide the strongest mitigation possible. WAFs are designed to protect web applications and block the majority of the most common and dangerous web application attacks. They can be deployed inline as a proxy or bridge, out of band on a mirror port, or even on the web server itself, where they can audit traffic to and from the web servers and applications and analyze web application logic. They can also manipulate requests and responses and hide the TCP stack of the web server. Rather than relying solely on matching traffic against a signature or anomaly file, they watch the behavior of web requests and responses.

IPSs and WAFs are similar in that they analyze traffic, but WAFs can protect against web-based threats like SQL injection, session hijacking, XSS, parameter tampering, and other threats identified in the OWASP Top 10. Some WAFs may contain signatures to block well-known attacks, but they also understand the web application logic. In addition to protecting the web application from known attacks, WAFs can also detect and potentially prevent unknown attacks.
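To make the signature-versus-application-logic distinction concrete, here is a minimal, hypothetical Python sketch (not F5 code, and greatly simplified): a byte-for-byte signature check misses a URL-encoded SQL injection payload, while a WAF-style check normalizes (decodes) the request before evaluating it.

```python
# Hypothetical illustration only -- not product code.
# A naive signature engine inspects the raw bytes of a request, so a
# URL-encoded payload slips past it. A WAF-style check decodes/normalizes
# the request first, so the same payload is caught.
from urllib.parse import unquote_plus

SIGNATURES = ["union select", "' or '1'='1"]

def naive_ips_match(raw_request: str) -> bool:
    """Match signatures against the raw, un-decoded request."""
    lowered = raw_request.lower()
    return any(sig in lowered for sig in SIGNATURES)

def waf_style_match(raw_request: str) -> bool:
    """Decode twice (to defeat double encoding), then match."""
    normalized = unquote_plus(unquote_plus(raw_request)).lower()
    return any(sig in normalized for sig in SIGNATURES)

# An attacker URL-encodes the payload to evade a byte-for-byte signature.
request = "GET /item?id=1%27%20UNION%20SELECT%20password%20FROM%20users"

print(naive_ips_match(request))   # False -- encoded payload not recognized
print(waf_style_match(request))   # True  -- decoded to "1' union select ..."
```

Real WAFs apply many normalization passes (URL decoding, case folding, comment stripping, and so on) before policy evaluation, which is exactly what a pure signature engine cannot do without application awareness.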
A WAF may, for example, observe an unusually large amount of traffic coming from the web application. The WAF can flag it as unusual or unexpected traffic and can block that data. A signature-based IPS has very little understanding of the underlying application. It cannot protect URLs or parameters. It does not know if an attacker is web scraping, and it cannot mask sensitive information like credit card and Social Security numbers. It could protect against specific SQL injections, but it would have to match the signatures perfectly to trigger a response, and it does not normalize or decode obfuscated traffic. One advantage of IPSs is that they can protect the most commonly used Internet protocols, such as DNS, SMTP, SSH, Telnet, and FTP.

The best security implementation will likely involve both an IPS and a WAF, but organizations should also consider which attack vectors are getting traction in the malicious hacking community. An IDS or IPS has only one answer to those problems: signatures. Signatures alone can't protect against zero-day attacks, for example; proactive URL and parameter enforcement, allowed methods, and deep application knowledge are essential to this task. If a zero-day attack does occur, an IPS's signatures can't offer any protection. However, if a zero-day attack occurs that a WAF doesn't detect, it can still be virtually patched using F5's iRules until there's a permanent fix.

A security conversation should be about how to provide the best layered defense. Web application firewalls like BIG-IP ASM protect traffic at multiple levels, using several techniques and mechanisms. An IPS just reads the stream of data, hoping that the traffic matches its one technique: signatures. Web application firewalls are unique in that they can detect and prevent attacks against a web application. They provide in-depth inspection of web traffic and can protect against many of the same vulnerabilities that IPSs look for. They are not designed, however, to purely inspect network traffic like an IPS. If an organization already has an IPS as part of the infrastructure, the ideal secure infrastructure would add a WAF to enhance the capabilities offered by the IPS. This is a best practice of layered defenses. The WAF provides yet another layer of protection within an organization's infrastructure and can protect against many attacks that would sail through an IPS. If an organization has neither, the WAF would provide the best application protection overall. ps

Related:
3 reasons you need a WAF even if your code is (you think) secure
Web App Attacks Rise, Disclosed Bugs Decline
Next-Gen Firewalls Make Old Arguments New Again
Why Developers Should Demand Web App Firewalls
Too Dangerous to Enter? Asian IT security study finds enterprises revising strategy to accommodate new IT trends
Protecting the navigation layer from cyber attacks
OWASP Top Ten Project
F5 Case Study: WhiteHat Security

Technorati Tags: F5, PCI DSS, waf, owasp, Pete Silva, security, ips, vulnerabilities, compliance, web, internet, cybercrime, web application, identity theft

PEM: Subscriber-Aware Policy and Why Every Large Network Needs One
The previous post, "PEM: Key Component of the Next Generation University Network," provided a high-level overview of several Policy Enforcement Manager features that help K-12 schools, colleges, and universities transform their networks into agile, user-focused "Data Delivery Fabrics" that redefine the way educational institutions provide data connectivity services to students, faculty, staff, and guests.

As with all networks, schools provide access to internal resources as well as the Internet. Typically, internal network (LAN) traffic is not a major concern for network admins (although at some point WiFi saturation prompts infrastructure expansion), but Internet link saturation is a much more common and serious issue, since any expansion of Internet access brings increased ongoing operating expense (opex) and, in many cases, infrastructure expansion resulting in upfront capital expense (capex). Even when an institution can afford a larger ISP link, regional Internet service providers (ISPs) may not offer the required bandwidth, or the ISP may lack sufficient infrastructure to support or provide increased bandwidth resources.

Nobody likes slow Internet. From the myriad apps constantly pulling data in the background to the always-connected lifestyle of millennial students, the need for a fast, reliable, and low-latency connection is more critical than ever. In an environment with limited resources, such as a school's ISP link, it is critical to have the ability to control and distribute those resources according to priorities that maximize the user experience while still providing a healthy mix of QoS for different types of traffic. F5's Policy Enforcement Manager (PEM) has a number of facilities to enable schools to achieve the optimal balance between performance and traffic priority: policies, bandwidth controllers, traffic intelligence categories, and presets, among others. Today we will talk about the core PEM functionality: Enforcement policies.

There are three main types of PEM Enforcement policies:

Pic 1. PEM Enforcement policy types

Global Policy: applied to all users, known and unknown
Subscriber Policy: applied to known users, provisioned statically or discovered via DHCP, RADIUS, or Access Profiles and iRules
Unknown Subscriber Policy: applied to unknown users

PEM uses various subscriber discovery methods, which usually differ by implementation. RADIUS and DHCP "sniffing" are among the configurable discovery methods. When PEM sees traffic, it checks whether the source IP address belongs to any known user (a previously discovered subscriber). If the user is known, traffic is classified and the appropriate action is taken according to that user's Subscriber Policy. However, if the source IP address is not known to PEM, the Unknown Subscriber Policy is used until that user is discovered. The Global Policy is applied to all users and may contain high-level rules applicable to everyone in the network (e.g., blocking of malicious URLs, suppression of certain P2P applications, etc.).

Pic 2. PEM Policies example

Each user can be assigned a Subscriber Policy, and as long as the user is known to PEM, all traffic associated with that user will be analyzed and prioritized according to the policy rules. Among other functions, rules provide application visibility by classifying both encrypted and unencrypted traffic into application categories. URL filtering and blocking actions are also provisioned using PEM policy rules.
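To illustrate how these three policy types interact, here is a minimal, hypothetical Python sketch of the selection logic described above. The names, addresses, and rule labels are invented; this is a conceptual model only, not how PEM is implemented.

```python
# Hypothetical sketch of the policy-selection logic described above;
# names and structures are illustrative, not PEM's actual implementation.
from dataclasses import dataclass, field

@dataclass
class Policy:
    name: str
    rules: list = field(default_factory=list)   # e.g. classification/QoS rules

GLOBAL_POLICY = Policy("global", ["block-malicious-urls", "suppress-p2p"])
UNKNOWN_SUBSCRIBER_POLICY = Policy("unknown-subscriber", ["default-qos"])

# Subscribers discovered via RADIUS/DHCP sniffing, keyed by source IP.
known_subscribers = {
    "10.1.20.15": Policy("student-standard", ["skype-1mbps", "video-pacing"]),
    "10.1.30.7":  Policy("faculty-priority", ["high-priority-qos"]),
}

def policies_for(source_ip: str) -> list:
    """Return the policies applied to a flow from source_ip.

    The global policy always applies; the subscriber policy applies if the
    source IP maps to a discovered subscriber, otherwise the unknown-
    subscriber policy is used until discovery completes.
    """
    subscriber_policy = known_subscribers.get(source_ip, UNKNOWN_SUBSCRIBER_POLICY)
    return [GLOBAL_POLICY, subscriber_policy]

for ip in ("10.1.20.15", "192.168.99.9"):
    print(ip, [p.name for p in policies_for(ip)])
```

The point is simply that the global policy always applies, while the per-user policy falls back to the unknown-subscriber policy until discovery (for example via RADIUS or DHCP sniffing) completes.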
PEM can associate a rule with traffic using any of the following:

Classification
URL category
Flow
Custom

Classification
Pic 3. Policy rule Classification example
The Classification tab in enforcement policy rules has a flexible definition to match an Application or Category from the extensive list provided in the drop-down menu. PEM uses signatures to detect the applications. These signatures are updated periodically by F5, and PEM can be configured to check for signature updates automatically on a daily, weekly, or monthly basis. Matching criteria can provide positive or negative matching, allowing granular actions like QoS/bandwidth control, reporting, or TCP optimization to be applied to various classified traffic types.

URL category
Pic 4. URL categories and URLDB
URLs can be categorized according to pre-defined or custom definitions. PEM can also use an external URLDB/feed list, which makes it easy to extend the pre-defined category list and maintain a central reference for categorized URLs. The URLDB is a CSV file that contains website URLs and their associated category IDs.
Pic 5. Custom URLDB content example

Flow
PEM can use flow information as a condition to apply an enforcement policy rule. Various flow-specific properties can be configured as a matching condition: DSCP value, protocol, IP type, source/destination address/port, VLAN, etc.
Pic 6. Flow condition rule example

Custom
Like any other BIG-IP module, PEM functionality can be extended and customized using iRules. The Custom tab allows the user to configure a specific condition not covered by built-in PEM functionality. As always, iRules are a powerful and flexible way to extend platform functionality. Please refer to the DevCentral iRules API Wiki for PEM-specific iRules syntax.

Enforcement policy rules are defined to perform a specific action within the policy: limit bandwidth, close the "Gate" (block the traffic), redirect, insert HTML content, log messages, etc. Some items may only be applicable to service providers (e.g., application reporting and Rating Groups), so we will focus on the configuration items most commonly used by education network admins:

Reporting: usage, QoE, TCP Analytics
Gate Status
Forwarding
Modify Header
Insert Content
QoS
TCP Optimization
Congestion Detection
Custom Action (iRule)

Rather than describing each feature separately, let's consider a few common use cases for these rules. For example, we can create two rules: one that blocks all traffic classified as "Phishing and Other Frauds" by assigning a Gate Status of "Disabled," and one that limits the bandwidth of Skype to a maximum of 10 Mbps system-wide and 1 Mbps per user. The Classification rule will look similar to:
Pic 7. Classification rule example
The bandwidth limiting rule uses bandwidth controllers within the QoS section. The resulting Enforcement Policy will protect users from phishing and other fraudulent sites while limiting the bandwidth of Skype (including video calls) to 1 Mbps per user, with 10 Mbps total allocated for Skype application traffic.

Flexible, user-aware classification and the variety of traffic actions that individual rules can take create an environment of intelligent, micro-granular control. This approach balances apps and services by both speed and priority, protects users (on-campus and remote students, staff, and visitors) from fraudulent and malicious activities, and enhances the overall quality of the user experience by optimizing TCP and pacing video, preventing congestion on the ISP link.
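To make the dual Skype limit concrete, here is a rough, hypothetical token-bucket sketch in Python. It models only the idea of combining a per-user limit with a system-wide limit for one classified application; PEM's bandwidth controllers are configured rather than coded, and work differently under the hood.

```python
# Illustrative token-bucket model of the dual limit described above
# (10 Mbps for the application system-wide, 1 Mbps per user).
# This is a conceptual sketch, not PEM's bandwidth controller implementation.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0          # refill rate in bytes/second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

MBPS = 1_000_000
skype_global = TokenBucket(rate_bps=10 * MBPS, burst_bytes=1_250_000)
skype_per_user = {}

def forward_skype_packet(user_ip: str, nbytes: int) -> bool:
    """Forward a packet only if both the per-user and the system-wide
    buckets have capacity, mirroring the two rules above. (A real shaper
    would queue rather than drop, and would not consume per-user tokens
    when the global bucket rejects.)"""
    user_bucket = skype_per_user.setdefault(
        user_ip, TokenBucket(rate_bps=1 * MBPS, burst_bytes=125_000))
    return user_bucket.allow(nbytes) and skype_global.allow(nbytes)

print(forward_skype_packet("10.1.20.15", 1500))   # True while within limits
```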
Institutions of any size can immediately start enjoying the incredible benefits that come with the introduction of PEM policies into their network. F5 engineers are available to make every project a success, helping customers from inception to a successful deployment. Next, we will dive into how PEM can save ISP link bandwidth by forcing streaming video to fall back to a lower resolution while supporting the encrypted QUIC protocol. Stay tuned!

Some questions about ASM module from a beginner
Hello Everyone, My company recently bought some ASM licences for our F5 BIG-IP and I'm in charge of defining the security policies, but I have no experience with it so far and only a read-only account, so it's pretty hard to run tests. That's why I have some questions for you:
1/ What's the difference between Transparent and Blocking in the enforcement mode, and what suits each of them best in the signature set (learn/alarm/block)?
2/ What does "staging signature" mean? If I don't set a signature set, what does the policy block?
3/ What's the difference between Block in the policy (enforcement mode) and the block option in the signature set? Also, correct me if I'm wrong, but learn allows me to use the "manual traffic learning" option to see which threats the policy has detected, and alarm is something like a logging system?
4/ What happens if I activate both block options?
5/ A scenario much like what I plan to do to deploy my policies: I want to observe which threats are hitting my VS already in production, and who is behind them, before deciding what to block. What would be the best configuration: Transparent as the enforcement mode and "attack signatures configuration" in learn/alarm mode with an ERP (enforcement readiness period) of, let's say, 30 days, or something else? After finishing my analysis, where can I see what has been flagged by the signatures, and where can I decide whether or not to block it?
6/ What happens once the ERP is over? Do I have to change the enforcement mode once the analysis is over (Transparent -> Blocking, for example)? Will my policy keep checking whether new threats are detected?
I know it's a lot of questions to answer, but I have no one else to turn to, so thank you very much in advance. Regards,

Vulnerability Assessment with Application Security
The longer an application remains vulnerable, the more likely it is to be compromised. Protecting web applications is an around-the-clock job. Almost anything that is connected to the Internet is a target these days, and organizations are scrambling to keep their web properties available and secure. The ramifications of a breach or downtime can be severe: brand reputation, the ability to meet regulatory requirements, and revenue are all on the line. A 2011 survey conducted by Merrill Research on behalf of VeriSign found that 60 percent of respondents rely on their websites for at least 25 percent of their annual revenue.

And the threat landscape is only getting worse. Targeted attacks are designed to gather intelligence; steal trade secrets, sensitive customer information, or intellectual property; disrupt operations; or even destroy critical infrastructure. Targeted attacks have been around for a number of years, but 2011 brought a whole new meaning to advanced persistent threat. Symantec reported that the number of targeted attacks increased almost four-fold from January 2011 to November 2011. In the past, the typical profile of a target organization was a large, well-known, multinational company in the public, financial, government, pharmaceutical, or utility sector. Today, the scope has widened to include almost any size organization from any industry. The attacks are also layered in that the malicious hackers attempt to penetrate both the network and application layers.

To defend against targeted attacks, organizations can deploy a scanner to check web applications for vulnerabilities such as SQL injection, cross-site scripting (XSS), and forceful browsing; or they can use a web application firewall (WAF) to protect against these vulnerabilities. However, a better, more complete solution is to deploy both a scanner and a WAF. BIG-IP Application Security Manager (ASM) version 11.1 is a WAF that gives organizations the tools they need to easily manage and secure web application vulnerabilities with multiple web vulnerability scanner integrations.

As enterprises continue to deploy web applications, network and security architects need visibility into who is attacking those applications, as well as a big-picture view of all violations to plan future attack mitigation. Administrators must be able to understand what they see to determine whether a request is valid or an attack that requires application protection. Administrators must also troubleshoot application performance and capacity issues, which requires detailed statistics. With the increase in application deployments and the resulting vulnerabilities, administrators need a proven multi-vulnerability assessment and application security solution for maximum coverage and attack protection. But as many companies also support geographically diverse application users, they must be able to define who is granted or denied application access based on geolocation information.

Application Vulnerability Scanners
To assess a web application's vulnerability, most organizations turn to a vulnerability scanner. The scanning schedule might depend on a change control, like when an application is initially being deployed, or other factors like a quarterly report. The vulnerability scanner scours the web application, and in some cases actually attempts potential hacks, to generate a report indicating all possible vulnerabilities.
This gives the administrator managing the web security devices a clear view of all the exposed areas and potential threats to the website. It is a moment-in-time report and might not give full application coverage, but the assessment should give administrators a clear picture of their web application security posture. It includes information about coding errors, weak authentication mechanisms, fields or parameters that query the database directly, or other vulnerabilities that provide unauthorized access to information, sensitive or not. Many of these vulnerabilities would need to be manually re-coded or manually added to the WAF policy—both expensive undertakings.

Another challenge is that every web application is different. Some are developed in .NET, some in PHP or PERL. Some scanners perform better against particular development platforms, so it's important for organizations to select the right one. Some companies may need a PCI DSS report for an auditor, some need targeted penetration testing, and some need WAF tuning. These factors can also play a role in determining the right vulnerability scanner for an organization. Ease of use, target specifics, and automated testing are the baselines.

Once an organization has considered all those details, the job is still only half done. Simply having the vulnerability report, while beneficial, doesn't mean a web app is secure. The real value of the report lies in how it enables an organization to determine the risk level and how best to mitigate the risk. Since re-coding an application is expensive and time-consuming, and may generate even more errors, many organizations deploy a web application firewall like BIG-IP ASM. A WAF enables an organization to protect its web applications by virtually patching the open vulnerabilities until it has an opportunity to properly close the hole. Often, organizations use the vulnerability scanner report to either tighten or initially generate a WAF policy.

Attackers can come from anywhere, so organizations need to quickly mitigate vulnerabilities before they become threats. They need a quick, easy, effective solution for creating security policies. Although it's preferable to have multiple scanners or scanning services, many companies only have one, which significantly impedes their ability to get a full vulnerability assessment. Further, if an organization's WAF and scanner aren't integrated, neither is its view of vulnerabilities, as a non-integrated WAF UI displays no scanner data. Integration enables organizations both to manage the vulnerability scanner results and to modify the WAF policy to protect against the scanner's findings—all in one UI.

Integration Reduces Risk
While finding vulnerabilities helps organizations understand their exposure, they must also have the ability to quickly mitigate found vulnerabilities to greatly reduce the risk of application exploits. The longer an application remains vulnerable, the more likely it is to be compromised. F5 BIG-IP ASM, a flexible web application firewall, enables strong visibility with granular, session-based enforcement and reporting; grouped violations for correlation; and a quick view into valid and attack requests. BIG-IP ASM delivers comprehensive vulnerability assessment and application protection that can quickly reduce web threats with easy geolocation-based blocking—greatly improving the security posture of an organization's critical infrastructure.
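Conceptually, the scanner-to-WAF workflow amounts to translating each finding into a virtual-patch action. The sketch below is purely illustrative: the report format, field names, and mitigation labels are invented, and real scanner exports and BIG-IP ASM policy objects look different.

```python
# Hypothetical sketch of turning a scanner report into WAF policy entries.
# The report format and rule structure are invented for illustration only.
import json

scanner_report = json.loads("""
[
  {"url": "/search.php", "parameter": "q",    "vuln": "xss"},
  {"url": "/login.php",  "parameter": "user", "vuln": "sql_injection"},
  {"url": "/export.php", "parameter": null,   "vuln": "forceful_browsing"}
]
""")

# Map each finding class to a virtual-patch action until the code is fixed.
MITIGATIONS = {
    "xss":               "enforce_attack_signatures",
    "sql_injection":     "enforce_attack_signatures",
    "forceful_browsing": "restrict_url_to_allowed_flows",
}

def build_policy_entries(findings):
    """Emit one tightening action per scanner finding."""
    entries = []
    for f in findings:
        entries.append({
            "url": f["url"],
            "parameter": f["parameter"],
            "action": MITIGATIONS.get(f["vuln"], "manual_review"),
        })
    return entries

for entry in build_policy_entries(scanner_report):
    print(entry)
```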
BIG-IP ASM version 11.1 includes integration with IBM Rational AppScan, Cenzic Hailstorm, QualysGuard WAS, and WhiteHat Sentinel, building more integrity into the policy lifecycle and making it the most advanced vulnerability assessment and application protection on the market. In addition, administrators can better create and enforce policies with information about attack patterns from a grouping of violations or otherwise correlated incidents. In this way, BIG-IP ASM enables organizations to mitigate threats in a timely manner, greatly reduce the overall risk of attacks, and resolve most vulnerabilities.

With multiple vulnerability scanner assessments in one GUI, administrators can discover and remediate vulnerabilities within minutes from a central location. BIG-IP ASM offers easy policy implementation, fast assessment and policy creation, and the ability to dynamically configure policies in real time during assessment. To significantly reduce data loss, administrators can test and verify vulnerabilities from the BIG-IP ASM GUI, and automatically create policies with a single click to mitigate unknown application vulnerabilities.

Security is a never-ending battle. The bad guys advance, organizations counter, the bad guys adapt—and so the cat-and-mouse game continues. The need to properly secure web applications is absolute. Knowing what vulnerabilities exist within a web application can help organizations contain possible points of exposure. BIG-IP ASM v11.1 offers unprecedented web application protection by integrating with many market-leading vulnerability scanners to provide a complete vulnerability scan and remediation solution. BIG-IP ASM v11.1 enables organizations to understand inherent threats and take specific measures to protect their web application infrastructure. It gives them the tools they need to greatly reduce the risk of becoming the next failed-security headline. ps

Resources:
F5's Certified Firewall Protects Against Large-Scale Cyber Attacks on Public-Facing Websites
IPS or WAF Dilemma
F5 Case Study: WhiteHat Security
Oracle OpenWorld 2011: BIG-IP ASM & Oracle Database Firewall
Audio White Paper - Application Security in the Cloud with BIG-IP ASM
The Big Attacks are Back…Not That They Ever Stopped
Protection from Latest Network and Application Attacks
The New Data Center Firewall Paradigm – White Paper
Vulnerability Assessment with Application Security – White Paper
F5 Security Vignette: Hacktivism Attack – Video
F5 Security Vignette: DNSSEC Wrapping – Video
Jeremiah Grossman blog

Technorati Tags: F5, big-ip, virtualization, cloud computing, Pete Silva, security, waf, web scanners, compliance, application security, internet, TMOS, big-ip, asm

PEM: Key Component of the Next Generation University Network
In recent years, higher education institutions have become significant providers of digital services and content, ranging from mesh WiFi access to virtual-classroom services featuring high-bandwidth, real-time collaboration experiences for on-campus and remote students alike. In fact, many universities' IT networks have become so large that they now compete with some regional service providers in the amount of data they process and route within their IT infrastructure. Students, classrooms, staff, and guests all need reliable access to campus LAN and Internet services simultaneously. However, with a growing number of consumers, internal and outbound routes can quickly become saturated and oversubscribed, resulting in slow response times and degraded performance of the entire university network.

To prevent chaos and keep data-hungry devices from clogging up data links, universities have begun to employ certain services usually found in Service Provider (SP) networks. In particular, they deploy Policy and Charging Control (PCC) elements that:

Are subscriber-aware
Assign QoS to applications and services
Perform application-layer data inspection
Enforce subscriber and application policies
Ensure compliance with state and federal laws
Prevent access to inappropriate content
Provide visibility and reporting

So, how does the modern university achieve this without having to build a full-blown Evolved Packet Core inside its IT network? Some have implemented parts of this list using different network elements, but that approach offers limited centralized visibility and traffic control; others use the aging Cisco SCE, which will be End-of-Life on September 30, 2018. The most progressive university IT teams quickly realized the benefits of having a subscriber-aware policy enforcement device and turned to F5 Policy Enforcement Manager (PEM) as a full and integral solution that optimizes network resources and allows for optimal channel utilization, ultimately leading to an improved user experience and substantial financial savings for universities through much more efficient use of available bandwidth.

Pic 1. F5 Policy Enforcement Manager

Any school or other organization that implements PEM in its network can achieve "subscriber" (in SP terms) or end-user (in enterprise terms) granularity. That means every user connecting to the school or university network can be assigned a policy with rules that dictate how that user will be treated by the network. For example, some students may be given preferential access to certain network resources and applications, while faculty members may have unrestricted Internet access with higher priority during classes and post-class activities. By categorizing users and applications, the network can achieve better utilization, ensure fair resource consumption, and provide the best experience for all users.

Pic 2. Per-Subscriber Policy

In addition to subscribers, PEM also implements a "per-application" concept. Combined with the subscriber and global policy scopes, it provides the most comprehensive and agile configuration of policies. This capability enables the university to limit or block certain application types, e.g., P2P torrent traffic, various messengers, or social networks.

Pic 3. Per-Application Policy

SSL visibility is a crucial part of network monitoring and content filtering in public networks.
By terminating the SSL (or TLS) connection from users and establishing a new SSL connection to the application servers, PEM makes it possible to perform:

SNI analysis and classification
Traffic content inspection and manipulation
Detailed reporting and data visualization

Pic 4. SSL Forward Proxy

URL classification and filtering is another important aspect of managing an IT network in schools or universities. Age-appropriate content policies must be enforced for students and other users, while an up-to-date list of blacklisted and malicious websites is maintained. PEM utilizes a Webroot-provided DB for precise URL categorization. With more than 80 URL categories available, including live updates and custom categories, URL classification and enforcement becomes an effortless and efficient automatic routine. PEM also enables custom HTML content insertion into HTTP traffic, which can be used to warn users about a potentially harmful website or a resource blocked by the URL filtering engine.

Pic 5. URL Classification and Enforcement

Schools can also realize significant savings on bandwidth by using Policy Enforcement Manager's video pacing feature. PEM ensures that video content is pre-loaded at the same or a similar pace as it is consumed by the user. By doing so, it eliminates wasted bandwidth and the traffic spikes produced by multiple users accessing video resources at the same time. Without video pacing, pre-loading is triggered when a user starts watching content, making the entire length of the content available for viewing. Users sometimes stop watching before the end of the video file, effectively throwing away the unconsumed portion of the pre-loaded video. PEM ensures that no unnecessary content is pre-loaded, so no bandwidth is wasted.

Pic 6. Video pre-loaded, no pacing used
Pic 7. PEM uses video pacing

Network visibility and reporting plays a significant role in the network management domain. By knowing exactly what is happening in near real time, network administrators are empowered to identify violations and fix issues before they impact other users in the school network. PEM provides both on-box analytics and data export for reporting and visualization using third-party tools.

Pic 8. Data export options

Policy Enforcement Manager enables schools and universities to implement alternative, service provider-oriented network architectures delivering:

More granular control and visibility
Optimized user experience
Savings on Internet services

Next in this series, we will dive into deeper detail on how universities can best leverage the various features of PEM covered here. Stay tuned!
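As a closing illustration of the URL filtering and content-insertion behavior described above, here is a small, hypothetical Python sketch. The feed-list format, category names, and block page are invented for the example; the real Webroot categories and PEM's URL filtering configuration differ.

```python
# Hypothetical sketch of URL categorization from a custom feed list plus a
# block-page response; data and category IDs are invented for illustration.
import csv, io

# A tiny custom URLDB/feed list: hostname and category ID, as CSV.
FEED_CSV = """www.example-phishing.test,150
www.example-gambling.test,72
intranet.campus.test,900
"""

CATEGORY_NAMES = {150: "Phishing and Other Frauds", 72: "Gambling",
                  900: "Custom: Campus Internal"}
BLOCKED_CATEGORIES = {"Phishing and Other Frauds", "Gambling"}

url_db = {row[0]: int(row[1]) for row in csv.reader(io.StringIO(FEED_CSV))}

BLOCK_PAGE = ("<html><body><h1>Access blocked</h1>"
              "<p>Category: {cat}</p></body></html>")

def handle_request(hostname: str) -> str:
    """Return 'allow', or the HTML block page inserted into the response."""
    category = CATEGORY_NAMES.get(url_db.get(hostname, -1), "Uncategorized")
    if category in BLOCKED_CATEGORIES:
        return BLOCK_PAGE.format(cat=category)
    return "allow"

print(handle_request("www.example-phishing.test"))  # block page HTML
print(handle_request("www.university.test"))        # allow
```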
BYOD Policies – More than an IT Issue Part 4: User Experience and Privacy

#BYOD or Bring Your Own Device has moved from trend to a permanent fixture in today's corporate IT infrastructure. It is not strictly an IT issue, however. Many groups within an organization need to be involved as they grapple with the risk of mixing personal devices with sensitive information. In my opinion, BYOD follows the classic freedom vs. control dilemma: the freedom of users to choose and use their desired device versus an organization's responsibility to protect and control access to sensitive resources. While not having all the answers, this mini-series tries to ask many of the questions that any organization needs to answer before embarking on a BYOD journey.

Enterprises should plan for rather than inherit BYOD. BYOD policies must span the entire organization but serve two purposes: IT and the employees. The policy must serve IT to secure the corporate data and minimize the cost of implementation and enforcement. At the same time, the policy must serve the employees to preserve the native user experience, keep pace with innovation, and respect the user's privacy. A sustainable policy should include a clear BYOD plan for employees, including standards on acceptable device types and mobile operating systems, along with a support policy showing how the device is managed and operated. Some key policy issue areas include liability, device choice, economics, user experience and privacy, and a trust model. Today we look at user experience and privacy.

User Experience and Privacy
Most application deployments have the user experience in mind, and BYOD is no different. Employees want and need fast and secure access to the right resources at the right time to accomplish their job. BYOD only increases the need for a rich user experience. Understand how the policy impacts user experience, including battery life. Some apps can drain battery life quickly, which in turn decreases user satisfaction and can potentially limit their interactions. There may be instances where the user has chosen a third-party email application versus either the native email client or one that's supported by corporate. Certainly a dilemma, but as stated earlier, a policy should state what's allowed and not allowed. MDM technology is also improving to the point that secure apps (a browser, an email client, and other resources) are secured on the client device. A user can still use their email client of choice for personal use, but work email is delivered through the secure email client.

While user experience can contribute to the happiness and productivity of the user/employee, privacy can be a huge issue when BYOD is implemented. A 2010 Supreme Court case, City of Ontario v. Quon, looked at the extent to which the right to privacy applies to electronic communications in a government workplace. The case also looked at Fourth Amendment rights against unreasonable search and seizure. Essentially, a number of police officers were fired for sending sexually explicit messages with city-issued devices. The city requested an audit of the overages along with the sent messages. The officers sued, since the agreement/policy they had with the city allowed them to send personal notes and pay for any overages that might occur. They also claimed that their constitutional rights were violated, along with their privacy under federal communications laws.
The court ruled that, since the officers were using city-issued devices, the municipality was well within its rights to search, because the search was work-related, and that it had not violated the Fourth Amendment. If everything else had been the same but the devices had been personally owned by the officers in question, the city could have been in violation and liable. Within the BYOD policy, organizations should also establish a social contract that communicates how and when IT will monitor the device, along with when, how, and why a device could be wiped.

As part of the BYOD policy, the User Experience & Privacy checklist, while not exhaustive, should:
· Identify what activities and data must be monitored
· Determine the circumstances when a device wipe must occur
· Determine how employees can self-remediate
· Determine which core services will be delivered to users
· Draft a BYOD social contract with Human Resources

ps

Related
BYOD Policies – More than an IT Issue Part 1: Liability
BYOD Policies – More than an IT Issue Part 2: Device Choice
BYOD Policies – More than an IT Issue Part 3: Economics
BYOD–The Hottest Trend or Just the Hottest Term
FBI warns users of mobile malware
Will BYOL Cripple BYOD?
Freedom vs. Control
What’s in Your Smartphone?
Worldwide smartphone user base hits 1 billion
SmartTV, Smartphones and Fill-in-the-Blank Employees
Evolving (or not) with Our Devices
The New Wallet: Is it Dumb to Carry a Smartphone?
Bait Phone
BIG-IP Edge Client 2.0.2 for Android
BIG-IP Edge Client v1.0.4 for iOS
New Security Threat at Work: Bring-Your-Own-Network
Legal and Technical BYOD Pitfalls Highlighted at RSA

CloudFucius Says: AAA Important to the Cloud
While companies certainly see a business benefit to a pay-as-you-go model for computing resources, security concerns seem always to appear at the top of surveys regarding cloud computing. These concerns include authentication, authorization, and accounting (AAA) services; encryption; storage; security breaches; regulatory compliance; location of data and users; and other risks associated with isolating sensitive corporate data. Add to this array of concerns the potential loss of control over your data, and the cloud model starts to get a little scary.

No matter where your applications live in the cloud or how they are being served, one theme is consistent: you are hosting and delivering your critical data at a third-party location, not within your four walls, and keeping that data safe is a top priority. Most early adopters began to test hosting in the cloud using non-critical data. Performance, scalability, and shared resources were the primary focus of initial cloud offerings. While this is still a major attraction, cloud computing has matured and established itself as yet another option for IT. More data—including sensitive data—is making its way to the cloud. The problem is that you really don't know where in the cloud the data is at any given moment.

IT departments are already anxious about the confidentiality and integrity of sensitive data; hosting this data in the cloud highlights not only concerns about protecting critical data in a third-party location but also role-based access control to that data for normal business functions. Organizations are beginning to realize that the cloud does not lend itself to static security controls. Like all other elements within a cloud architecture, security must be integrated into a centralized, dynamic control plane. In the cloud, security solutions must have the capability to intercept all data traffic, interpret its context, and then make appropriate decisions about that traffic, including instructing other cloud elements how to handle it. The cloud requires the ability to apply global policies and tools that can migrate with, and control access to, the applications and data as they move from data center to cloud—and as they travel to other points in the cloud.

One of the biggest areas of concern for cloud vendors and customers alike is strong authentication, authorization, and encryption of data to and from the cloud. Users and administrators alike need to be authenticated—with strong or two-factor authentication—to ensure that only authorized personnel are able to access data. And the data itself needs to be segmented to ensure there is no leakage to other users or systems. Most experts agree that AAA services, along with secure, encrypted tunnels to manage your cloud infrastructure, should be at the top of the list of basic cloud services offered by vendors. Since data can be housed at a distant location where you have less physical control, logical control becomes paramount, and enforcing strict access to raw data and protecting data in transit (such as uploading new data) becomes critical to the business. Lost, leaked, or tampered data can have devastating consequences.

Secure services based on SSL VPN offer endpoint security, giving IT administrators the ability to see who is accessing the organization and what the endpoint device's posture is, to validate against the corporate access policy.
Strong AAA services, L4 and L7 user Access Control Lists, and integrated application security help protect corporate assets and maintain regulatory compliance. Cloud computing, while quickly evolving, can offer IT departments a powerful alternative for delivering applications. Cloud computing promises scalable, on-demand resources; flexible, self-serve deployment; lower TCO; faster time to market; and a multitude of service options that can host your entire infrastructure, be a part of your infrastructure, or simply serve a single application.

And one from Confucius himself: I hear and I forget. I see and I remember. I do and I understand. ps

Defense in Depth in Context
In the days of yore, a military technique called defense in depth was used to protect kingdoms, castles, and other locations that might be vulnerable to attack. It is a layered defense strategy in which the attacker has to breach several layers of protection to finally reach the intended target. It allows the defender to spread their resources rather than put all of the protection in one location. It is also a multifaceted approach to protection in that there are other mechanisms in place to help, and it is redundant, so if a component fails or is compromised, others are ready to step in to keep the protection intact.

Information technology also recognizes this technique as one of the best practices when protecting systems. The infrastructure and the systems it supports are fortified with a layered security approach. There are firewalls at the edge and, often, security mechanisms at every segment of the network. Circumvent one, and the next layer should catch the attacker.

There is one little flaw in the defense-in-depth strategy: it is designed to slow down attacks, not necessarily stop them. It gives you time to mobilize a counter-offensive, and it is an expensive and complex proposition for an attacker. It is more of a deterrent than anything, and ultimately the attacker could decide that the benefits of continuing the attack outweigh the additional costs. In the digital world, it is also often interpreted as redundancy: place multiple iterations of a defensive mechanism within the path of the attacker. The problem is that the only way to increase the cost and complexity for the attacker is to raise the cost and complexity of your own defenses. Complexity is the kryptonite of good security, and what you really need is security based on context.

Context takes into account the environment or conditions surrounding an event to make an informed decision about how to apply security. This is especially true when protecting a database. Database firewalls are critical components for protecting your valuable data and can stop a SQL injection attack, for instance, in an instant. What they lack is the ability to decipher contextual data like user ID, session, cookie, browser type, IP address, location, and other metadata about who or what actually performed the attack. While a database firewall can see that a particular SQL query is invalid, it cannot tell who made the request.

Web application firewalls, on the other hand, can gather user-side information, since many of their policy decisions are based on the user's context. A WAF monitors every request and response between the browser and the web application and consults a policy to determine whether the action and data are allowed. It uses information such as user, session, cookie, and other contextual data to decide whether a request is valid.

Independent technologies that protect against web attacks or database attacks are available, but they have not been linked to provide unified notification and reporting. Now imagine if your database were protected by a layered, defense-in-depth architecture along with the contextual information to make informed, intelligent decisions about database security incidents. The integration of BIG-IP ASM with Oracle's Database Firewall offers the database protection that Oracle is known for and the contextual intelligence that is baked into every F5 solution. Unified reporting for both the application firewall and the database firewall provides more convenient and comprehensive security monitoring.
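The kind of correlation this enables can be pictured with a small, hypothetical Python sketch: joining a database firewall's SQL events with a WAF's per-request user context to answer "who sent that query?". The log formats and fields below are invented for illustration; the actual BIG-IP ASM and Oracle Database Firewall integration works differently.

```python
# Hedged, hypothetical sketch of correlating database firewall events with
# WAF request context. Log formats are invented; not the real integration.
from datetime import datetime, timedelta

waf_requests = [
    {"time": datetime(2011, 5, 2, 10, 15, 3), "user": "jsmith",
     "session": "abc123", "src_ip": "203.0.113.7",
     "uri": "/account?id=1' OR '1'='1"},
]

dbfw_events = [
    {"time": datetime(2011, 5, 2, 10, 15, 4),
     "query": "SELECT * FROM accounts WHERE id = '1' OR '1'='1'",
     "verdict": "blocked-sql-injection"},
]

def correlate(db_event, requests, window=timedelta(seconds=5)):
    """Attach the closest preceding WAF request (within a time window) to a
    database firewall event, adding user/session/IP context to the alert."""
    candidates = [r for r in requests
                  if timedelta(0) <= db_event["time"] - r["time"] <= window]
    if not candidates:
        return {**db_event, "user": "unknown"}
    req = max(candidates, key=lambda r: r["time"])
    return {**db_event, "user": req["user"], "session": req["session"],
            "src_ip": req["src_ip"], "uri": req["uri"]}

for event in dbfw_events:
    print(correlate(event, waf_requests))
```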
Integration between the two security solutions offers a holistic approach to protecting the web and database tiers from SQL injection attacks. The integration gives you the layered protection many security professionals recognize as a best practice, plus the contextual information needed to make intelligent decisions about what action to take. This solution provides improved SQL injection protection to F5 customers and correlated reporting for richer forensic information on SQL injection attacks to Oracle database customers. It is an end-to-end web application and database security solution that protects data, customers, and their businesses. ps

Resources:
F5 Joins with Oracle to Offer Enhanced Security for Web-Based Database Applications
Security for Web-Based Database Applications Enhanced With F5 and Oracle Product Integration
Using Oracle Database Firewall with BIG-IP ASM
F5 Networks Adds To Oracle Database
Oracle Database Firewall
BIG-IP Application Security Manager
The “True Security Company” Red Herring
F5 Friday: Two Heads are Better Than One

CSRF Prevention with F5's BIG-IP ASM v10.2
Watch how BIG-IP ASM v10.2 can prevent cross-site request forgery (CSRF). Shlomi Narkolayev demonstrates how to carry out a CSRF attack and then shows how BIG-IP ASM stops it in its tracks. The configuration of CSRF protection is literally a checkbox.
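For readers unfamiliar with the underlying mechanism, here is a minimal, hypothetical Python sketch of the token-based idea that CSRF protections are generally built on. It is a conceptual illustration only and says nothing about how BIG-IP ASM implements the feature internally.

```python
# Minimal, hypothetical sketch of token-based CSRF protection: the server
# embeds a per-session token in its own forms and rejects state-changing
# requests that do not echo it back. Conceptual only; not ASM's mechanism.
import hmac, hashlib, secrets

SERVER_SECRET = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Token embedded as a hidden field in forms served for this session."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def is_request_allowed(session_id: str, submitted_token: str) -> bool:
    """A POST is accepted only if it carries the token tied to the session.
    A forged cross-site request cannot know the token, so it is rejected."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted_token)

session = "user-session-42"
token = issue_csrf_token(session)
print(is_request_allowed(session, token))            # True  -- legitimate form post
print(is_request_allowed(session, "guessed-token"))  # False -- forged request blocked
```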