Making WAF Simple: Introducing the OWASP Compliance Dashboard
Whether you are a beginner or an expert, there is a truth that I want to let you in on: building and maintaining Web Application Firewall (WAF) security policies can be challenging. How much security do you really need? Is your configuration too much or too little? Have you created an operational nightmare? Many well-intentioned administrators will initially enable every available feature, thinking that it is providing additional security to the application, when in truth, it is hindering it. How, you may ask? False positives and noise. The more noise and false positives, the harder it becomes to find the real attacks, and the greater the likelihood that you begin disabling features that ARE providing essential security for your applications. So… less is better then? That isn't the answer either; what good are our security solutions if they aren't protecting against anything? The key to success, and what we will look at further in this article, is implementing best practice controls that are both measurable and manageable for your organization.

The OWASP Application Security Top 10 is a well-respected list of the ten most prevalent and dangerous application layer attacks that you almost certainly should protect your applications from. By first focusing your security controls on the items in the OWASP Top 10, you are improving the manageability of your security solution and getting the most "bang for your buck". Now, the challenge is, how do you take such a list and build real security protections for your applications?

Introducing the OWASP Compliance Dashboard

Protecting your applications against the OWASP Top 10 is not a new thing; in fact, many organizations have been taking this approach for quite some time. The challenge is that most implementations that claim to "protect" against the OWASP Top 10 rely solely on signature-based protections for only a small subset of the list and provide zero insight into your compliance status. 
The OWASP Compliance Dashboard introduced in version 15.0 on BIG-IP Advanced WAF reinvents this idea by providing a holistic and interactive dashboard that clearly measures your compliancy against the OWASP Application Security Top 10. The Top 10 is then broken down into specific security protections, including both positive and negative security controls, that can be enabled, disabled, or ignored directly on the dashboard. We realize that a WAF policy alone may not provide complete protection across the OWASP Top 10; this is why the dashboard also includes the ability to review and track the compliancy of best practices outside the scope of a WAF alone, such as whether the application is subject to routine patching or vulnerability scanning.

To illustrate this, let's assume I have created a brand new WAF policy using the Rapid Deployment policy template and accepted all default settings. What compliance score do you think this policy might have? Let's take a look. Interesting. The policy is 0/10 compliant, and only A2 Broken Authentication and A3 Sensitive Data Exposure have partial compliance. Why is that? The Rapid Deployment template should include some protections by default, shouldn't it? Expanding A1 Injection, we see a list of protections required in order to be marked as compliant. Hovering over the list of attack signatures, we see that each category of signature is in 'Staging' mode. Aha! Signatures in staging mode are not enforced and therefore cannot block traffic. Until the signature set is enforced, we do not mark that protection as compliant. For those of you who have mistakenly left entities such as signatures in staging for longer than desired, this is also a GREAT way to quickly find them. I also told you we could interact with the dashboard to influence the compliancy score, so let's demonstrate that. Each item can be enforced DIRECTLY on the dashboard by selecting the "Enforce" checkmark on the right. 
No need to go into multiple menus; you can enforce all these security settings on a single page and preview the compliance status immediately. If you are happy with your selection, click on "Review & Update" to perform a final review of what the dashboard will be configuring on your behalf before you click on "Save & Apply Policy".

Note: Enforcing signatures before a period of staging may not be a good idea depending on your environment. Staging provides a period to assess signature matches in order to eliminate false positives. Enforcing these signatures too quickly could result in the denial of legitimate traffic.

Let's review the compliancy of our policy now with these changes applied. As you can see, A1 Injection is now 100% compliant, and other categories have also had their scores updated as a result of enforcing these signatures. The reason for this is that there is overlap in the security controls applied across these other categories. Not all security controls can be fully implemented directly via the dashboard, and as mentioned previously, not all security controls are signature-based. A7 Cross-Site Scripting was recalculated as 50% compliant with the signatures we enforced previously, so let's take a look at what else is required for full compliancy. The options available to us are to IGNORE the requirement, meaning we will be granted full compliancy for that item without implementing any protection, or to manually configure the protection referenced. We may want to ignore a protection if it is not applicable to the application or if it is not in scope for your deployment. Be mindful that ignoring an item means you are potentially misrepresenting the score of your policy; be very certain that the protection you are ignoring is in fact not applicable before doing so. I've selected to ignore the requirement for "Disallowed Meta Characters in Parameters" and my policy is now 100% compliant for A7 Cross-Site Scripting (XSS). 
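The staging semantics described above can be sketched in a few lines. This is an illustrative model only (not BIG-IP code; all names here are hypothetical), showing why a staged signature match is logged for review but never blocks, while an enforced match does:

```python
# Illustrative sketch of staging vs. enforcement semantics.
# All names are hypothetical; this is not BIG-IP Advanced WAF code.

from dataclasses import dataclass

@dataclass
class Signature:
    name: str
    pattern: str
    staged: bool  # True = staging (log only), False = enforced (block)

def evaluate(request_body: str, signatures: list) -> tuple:
    """Return (blocked, log_entries) for a request."""
    blocked = False
    log = []
    for sig in signatures:
        if sig.pattern in request_body:
            if sig.staged:
                log.append("STAGED match: %s (not blocking)" % sig.name)
            else:
                log.append("ENFORCED match: %s (blocking)" % sig.name)
                blocked = True
    return blocked, log

sigs = [
    Signature("SQLi-1", "' OR 1=1", staged=True),
    Signature("XSS-1", "<script>", staged=False),
]

print(evaluate("id=' OR 1=1", sigs))   # staged match only -> not blocked
print(evaluate("q=<script>", sigs))    # enforced match -> blocked
```

The point of the staging period is visible in the log entries: you can review which signatures would have fired on production traffic before flipping them to enforced.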
Lastly, we will look at items within the dashboard that fall outside the scope of WAF protections. Under A9 Using Components with Known Vulnerabilities, we are presented with a series of best practices such as "Application and system hardening", "Application and system patching" and "Vulnerability scanner integration". Using the dashboard, you can click on the checkmark to the right for "Requirement fulfilled" to indicate that your organization implements these best practices. By doing so, the OWASP Compliance score updates, providing you with real-time visibility into the compliancy for your application.

Conclusion

The OWASP Compliance Dashboard on BIG-IP Advanced WAF is a perfect fit for the security administrator looking to fine-tune and measure either existing or new WAF policies against the OWASP App Security Top 10. The OWASP Compliance Dashboard not only tracks WAF-specific security protections but also includes general best practices, allowing you to use the dashboard as your one-stop-shop to measure the compliancy for ALL your applications. For many applications, protection against the OWASP Top 10 may be enough, as it provides you with best practices to follow without having to worry about which features to implement and where. Note: Keep in mind that some applications may require additional controls beyond the protections included in the OWASP Top 10 list. For teams heavily embracing automation and CI/CD pipelines, logging into a GUI to perform changes likely does not sound appealing. In that case, I suggest reading more about our Declarative Advanced WAF policy framework, which can be used to represent the WAF policies in any CI/CD pipeline. Combine this with the OWASP Compliance Dashboard for an at-a-glance assessment of your policy and you have the best of both worlds. If you're not already using the OWASP Compliance Dashboard, what are you waiting for? 
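As a rough illustration of that declarative approach, a policy becomes a version-controlled JSON document rather than a sequence of GUI clicks. The sketch below is an approximation only; the field names and values are assumptions, so consult the official Declarative WAF policy documentation for the actual schema:

```json
{
  "policy": {
    "name": "owasp_top10_app_policy",
    "template": { "name": "POLICY_TEMPLATE_RAPID_DEPLOYMENT" },
    "enforcementMode": "blocking",
    "signature-settings": {
      "signatureStaging": false
    }
  }
}
```

A file like this can live in the application's repository and be pushed through the same CI/CD pipeline as the code it protects, with the dashboard used afterward for the at-a-glance compliance check.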
Look out for Bill Brazill, Victor Granic and myself (Kyle McKay) on June 10th at F5 Agility 2020, where we will be presenting and facilitating a class called "Protecting against the OWASP Top 10". In this class, we will be showcasing the OWASP Compliance Dashboard on BIG-IP Advanced WAF further and providing ample hands-on time fine-tuning and measuring WAF policies for OWASP compliance. Hope to see you there! To learn more, visit the links below.

Links
OWASP Compliance Dashboard: https://support.f5.com/csp/article/K52596282
OWASP Application Security Top 10: https://owasp.org/www-project-top-ten/
Agility 2020: https://www.f5.com/agility/attend

What To Expect In 2017: Security And Government Regulations
The government and cloud security's relationship is surprisingly hands-off. Current regulations already extend their umbrellas over our data in flight and at rest regardless of whose IaaS/SaaS you're using. For us traditional enterprise administrators, the regulations are established and we follow them because we're all perfect and deserve raises. But when it comes to "the cloud", we've introduced developers and application admins releasing services to the general public with great haste, sometimes without the checks and balances needed for compliance. The results are mixed. The increasingly popular scan-all-the-things method of finding vulnerable systems is weeding out quite a few unprotected cloud-connected data sets. Even the smallest vendor needs to validate their compliance requirements and implement them at the same pace they're implementing publicly available applications.

HIPAA

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) completed enforcement of security policies on protected health information (PHI) in 2006. HIPAA includes policies ranging from control and auditing to intrusion prevention and alerting, data validation, authentication practices, and risk analysis and remediation plans (and a host of other things we admins don't care as much about). We know we're getting compliance wrong because, as of January 31st of 2017, the Office for Civil Rights has received over 148,292 complaints of violation since 2003 (complaint != violation). 2017 will see more and more companies deploying cloud services that will start to gray the area between basic PII and PHI. Think Strava recording your epic bike ride or your Garmin tracking your last run... all store data relevant to you and how it relates to your physical condition. What's more interesting to investigate is: what are your rights to your last bike ride's information? Can it be sold with only basic de-identification? 
The boundaries between PHI and PII are blurring from our desire to connect ourselves, so expect a lot of angry people when an insurance provider is found denying a claim based off "acquired" Fitbit data.

SOX

Thanks, ENRON (and Tyco/WorldCom), for getting the Sarbanes-Oxley (SOX) Act of 2002 thrust onto all publicly traded companies. SOX regulates financial practices and corporate governance, divided into 11 titles, most of which are related to enforcing basic ethics we apparently take for granted. Section 802 is a whole different InfoSec ball game, regulating data retention, classification, and records keeping to ensure the shredder doesn't get used too much. And the cloud has made complying with 802's requirements much easier. Data governance tools, DLP, and enhanced record keeping tools are being introduced into all of our favorite cloud apps from Office 365 to Slack. It's assumed SOX will play a requirement for many cloud applications, so the needed technologies should exist out of the gate.

FIPS

The Federal Information Processing Standards (FIPS) standardize government use of computer systems by non-military agencies and contractors. Most people are familiar with FIPS 140-2: Security Requirements for Cryptographic Modules because it's so cool and interesting. FIPS 200: Minimum Security Requirements for Federal Information and Information Systems will be more prevalent as federal providers are encouraged/forced to use authorized cloud resources to migrate off existing internal government IT disasters and deprecating systems. The massive failure of the Office of Personnel Management, and the years-long blame game still underway, is making private cloud resources more attractive to existing government entities. FIPS 200 will play a vital role for those IaaS/SaaS providers to ensure they can receive those federal dollars. It's going to happen; it's already happening. Maybe we can stop being embarrassed by the federal fire hose of data breaches. 
FERPA

The Family Educational Rights and Privacy Act (FERPA) is not well known to those of us outside of educational service branches, but your child's data is just as important as your PHI. Specifically targeted at protecting student records, FERPA puts the student's data under the governing parent/guardian's control and approval. However, the infrastructure responsible for data handling is sometimes the same systems Ferris Bueller hacked into to change his absentee violations. Like FIPS, educational providers are migrating to the cloud in lieu of massive IT budget shortfalls for upgrading existing infrastructure. Cloud providers able to slap on a FERPA compliance sticker are few and far between. Given the coverage provided by other regulations, there shouldn't be too much adjusting for the cloud providers new to the educational market.

EU-US Privacy Shield

You want to do business with any European Union corporation? You want to build an office in any of those countries? Unless you want to spend the next 100 years working with different data privacy laws enacted by disparate governments, join the EU-US Privacy Shield program. When an org joins the EU-US Privacy Shield, they'll self-certify with the Department of Commerce and commit to the existing and future framework requirements. This has massive impact on the larger providers and services living in those geo-located clouds. You essentially become an international data steward who agrees to abide by a larger protection clause than the one defined for U.S. citizens. It's a good thing. That's what they tell me.

EU NIS Directive

The Directive on security of network and information systems (EU NIS Directive) was enacted July 6th, 2016 to provide a minimum compliance bar for data security against cyber-bad things for any country operating in or applying for European Union membership. This was meant to strengthen the "weakest link": keeping countries with poor infrastructure policies and practices from being penetrated by nation states close to those countries. 
This directive was created in response to growing shady nation state cyber practices worrying existing western bloc EU members. 2017 will test the strength of these policies and their violation penalties. None of this should be of any concern to established cloud providers, as they're operating in developed InfoSec countries. For the increasingly talented developer pool in the eastern EU, this may still be new and a potential stumbling block. Similar to new developers in the U.S. screwing up their PCI-DSS compliance, language barriers and InfoSec inexperience may be stumbling blocks exposing private EU citizen data.

This is just a summary of some of the regulations that will have more-than-normal impact on the next generation of IaaS/SaaS providers and their clients in the coming months. It's apparent from the too-numerous-to-name breaches that our data isn't secure, and the more we utilize the cloud in our personal lives, the more we're willing to expose. For enterprises though, the alarm is not as needed, as we're already seasoned in regulations and how to protect our data. Balancing the agile world of DevOps does threaten stable security practices, but your InfoSec team should make you well aware before the fed comes a-knockin'.

Complying with PCI DSS–Part 3: Maintain a Vulnerability Management Program
According to the PCI SSC, there are 12 PCI DSS requirements that satisfy a variety of security goals. Areas of focus include building and maintaining a secure network, protecting stored cardholder data, maintaining a vulnerability management program, implementing strong access control measures, regularly monitoring and testing networks, and maintaining information security policies. The essential framework of the PCI DSS encompasses assessment, remediation, and reporting. We're exploring how F5 can help organizations gain or maintain compliance, and today's topic is Maintain a Vulnerability Management Program, which includes PCI Requirements 5 and 6. For the earlier installments, see Complying with PCI DSS–Part 1: Build and Maintain a Secure Network and Complying with PCI DSS–Part 2: Protect Cardholder Data.

Requirement 5: Use and regularly update antivirus software or programs.

PCI DSS Quick Reference Guide description: Vulnerability management is the process of systematically and continuously finding weaknesses in an entity's payment card infrastructure system. This includes security procedures, system design, implementation, or internal controls that could be exploited to violate system security policy.

Solution: With BIG-IP APM and BIG-IP Edge Gateway, F5 provides the ability to scan any remote device or internal system to ensure that an updated antivirus package is running prior to permitting a connection to the network. Once connections are made, BIG-IP APM and BIG-IP Edge Gateway continually monitor the user connections for a vulnerable state change, and if one is detected, can quarantine the user on the fly into a safe, secure, and isolated network. Remediation services can include a URL redirect to an antivirus update server. For application servers in the data center, BIG-IP products can communicate with existing network security and monitoring tools. 
If an application server is found to be vulnerable or compromised, that device can be automatically quarantined or removed from the service pool. With BIG-IP ASM, file uploads can be extracted from requests and transferred over ICAP to a central antivirus (AV) scanner. If a file infection is detected, BIG-IP ASM will drop that request, making sure the file doesn't reach the web server.

Requirement 6: Develop and maintain secure systems and applications.

PCI DSS Quick Reference Guide description: Security vulnerabilities in systems and applications may allow criminals to access PAN and other cardholder data. Many of these vulnerabilities are eliminated by installing vendor-provided security patches, which perform a quick-repair job for a specific piece of programming code. All critical systems must have the most recently released software patches to prevent exploitation. Entities should apply patches to less-critical systems as soon as possible, based on a risk-based vulnerability management program. Secure coding practices for developing applications, change control procedures, and other secure software development practices should always be followed.

Solution: Requirements 6.1 through 6.5 deal with secure coding and application development; risk analysis, assessment, and mitigation; patching; and change control. Requirement 6.6 states: "Ensure all public-facing web applications are protected against known attacks, either by performing code vulnerability reviews at least annually or by installing a web application firewall in front of public-facing web applications." This requirement can be easily met with BIG-IP ASM, which is a leading web application firewall (WAF) offering protection for vulnerable web applications. Using both a positive security model for dynamic application protection and a strong, signature-based negative security model, BIG-IP ASM provides application-layer protection against both targeted and generalized application attacks. 
It also protects against the Open Web Application Security Project (OWASP) Top Ten vulnerabilities and threats on the Web Application Security Consortium's (WASC) Threat Classification lists.

To assess a web application's vulnerability, most organizations turn to a vulnerability scanner. The scanning schedule might depend on a change in control, as when an application is initially being deployed, or other triggers such as a quarterly report. The vulnerability scanner scours the web application, and in some cases actually attempts potential attacks, to generate a report indicating all possible vulnerabilities. This gives the administrator managing the web security devices a clear view of all exposed areas and potential threats to the website. Such a report is a moment-in-time assessment and might not result in full application coverage, but it should give administrators a clear picture of their web application security posture. It includes information about coding errors, weak authentication mechanisms, fields or parameters that query the database directly, or other vulnerabilities that provide unauthorized access to information, sensitive or not. Otherwise, many of these vulnerabilities would need to be manually re-coded or manually added to the WAF policy, both expensive undertakings.

Simply having the vulnerability report, while beneficial, doesn't make a web application secure. The real value of the report lies in how it enables an organization to determine the risk level and how best to mitigate the risk. Since recoding an application is expensive and time-consuming and may generate even more errors, many organizations deploy a WAF like BIG-IP ASM. A WAF enables an organization to protect its web applications by virtually patching the open vulnerabilities until developers have an opportunity to properly close the hole. Often, organizations use the vulnerability scanner report to either tighten or initially generate a WAF policy. 
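The virtual-patching workflow just described, scanner finding in, blocking rule out, can be sketched generically. This is a conceptual illustration only; the data structures are hypothetical and are not the BIG-IP ASM scanner-integration API:

```python
# Generic sketch of "virtual patching": turning a scanner finding into a
# WAF blocking rule until developers can fix the code. Hypothetical model,
# not the BIG-IP ASM integration API.

from dataclasses import dataclass

@dataclass
class Finding:
    url: str          # vulnerable endpoint reported by the scanner
    parameter: str    # e.g. "id"
    vuln_class: str   # e.g. "SQL Injection"

@dataclass
class WafRule:
    match_url: str
    match_parameter: str
    action: str

def virtual_patch(finding: Finding) -> WafRule:
    """Create a blocking rule scoped to the vulnerable URL/parameter."""
    return WafRule(match_url=finding.url,
                   match_parameter=finding.parameter,
                   action="block")

rule = virtual_patch(Finding("/account", "id", "SQL Injection"))
print(rule)
```

The point is the scoping: the rule blocks only the vulnerable URL/parameter combination, so the rest of the application keeps working while the code fix is pending.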
While finding vulnerabilities helps organizations understand their exposure, they must also have the ability to quickly mitigate those vulnerabilities to greatly reduce the risk of application exploits. The longer an application remains vulnerable, the more likely it is to be compromised.

For cloud deployments, BIG-IP ASM Virtual Edition (VE) delivers the same functionality as the physical edition and helps companies maintain compliance, including compliance with PCI DSS, when they deploy applications in the cloud. If an application vulnerability is discovered, BIG-IP ASM VE can quickly be deployed in a cloud environment, enabling organizations to immediately patch vulnerabilities virtually until the development team can permanently fix the application. Additionally, organizations are often unable to fix applications developed by third parties, and this lack of control prevents many of them from considering cloud deployments. But with BIG-IP ASM VE, organizations have full control over securing their cloud infrastructure.

BIG-IP ASM version 11.1 includes integration with IBM Rational AppScan, Cenzic Hailstorm, QualysGuard WAS, and WhiteHat Sentinel, making BIG-IP ASM the most advanced vulnerability assessment and application protection on the market. In addition, administrators can better create and enforce policies with information about attack patterns from a grouping of violations or otherwise correlated incidents. In this way, BIG-IP ASM protects the applications between scanning and patching cycles and against zero-day attacks that signature-based scanners won't find. Both are critical in creating a secure Application Delivery Network.

BIG-IP ASM also makes it easy to understand where organizations stand relative to PCI DSS compliance. With the BIG-IP ASM PCI Compliance Report, organizations can quickly see each security measure required to comply with PCI DSS 2.0 and understand which measures are or are not relevant to BIG-IP ASM functions. 
For relevant security measures, the report indicates whether the organization's BIG-IP ASM appliance complies with PCI DSS 2.0. For security measures that are not relevant to BIG-IP ASM, the report explains what action to take to achieve PCI DSS 2.0 compliance.

[Figure: BIG-IP ASM PCI Compliance Report]

Finally, with the unique F5 iHealth system, organizations can analyze the configuration of their BIG-IP products to identify any critical patches or security updates that may be necessary.

Next: Implement Strong Access Control Measures

ps

CloudFucius Wonders: Can Cloud, Confidentiality and The Constitution Coexist?
This question has been puzzling a few folks of late, not just CloudFucius. The judicial/legal side of the internet seems to have gotten some attention lately, even though courts have been trying to make sense of and catch up with technology for some time, probably since the Electronic Communications Privacy Act of 1986. There are many issues involved here, but a couple stand out for CloudFucius.

First, there is the 'Privacy vs. Convenience' dilemma. Many of us love and often need GPS navigators, whether a permanent unit in the vehicle or right from our handheld device, to get where we need to go. These services are most beneficial when searching for a destination, but each is also a 'tracking bug' in that it records every movement we make. This has certainly been beneficial in many industries like trucking, delivery, automotive, retail and many others, even with some legal issues. It has helped locate people during emergencies and disasters. It has also helped in geo-tagging photographs. But we do give up a lot of privacy, secrecy and confidentiality when using many of the technologies designed to make our lives 'easier.'

Americans have a rather tortured relationship with privacy. They often say one thing ("Privacy is important to me") but do another ("Sure, thanks for the coupon, here's my Social Security Number"), noted Lee Rainie, head of the Pew Internet and American Life Project. From: The Constitutional issues of cloud computing

You might not want anyone knowing where you are going, but by simply using a navigation system to get to your undisclosed location, someone can track you down. Often, you don't even need to be in navigation mode to be tracked; just having GPS enabled can leave breadcrumbs. Don't forget, even the most minuscule trips to the gas station can still contain valuable data... to someone. How do you know the details of your milk runs to the 7-Eleven aren't being gathered and analyzed? 
At the same time, where is that data stored, who has access, and how is it being used? I use GPS when I need it and I'm not suggesting dumping it, just wondering. I found a story where mobile coupons are being offered to your phone (depending on your GPS location, they can send you a coupon for a nearby merchant), along with this one about location-based strategies.

Second is the Fourth Amendment in the digital age. In the United States, the 4th Amendment protects against unreasonable searches and seizures. Law enforcement needs to convince a judge that a serious crime has occurred or is occurring to obtain a warrant prior to taking evidence from a physical location, like your home. It focuses on physical possessions and space. For instance, if you are committing crimes, you can place your devious plans in a safe hidden in your bedroom, and law enforcement needs to present a search warrant before searching your home for such documents. But what happens if you decide to store your 'Get rich quick scheme' planning document in the cloud? Are you still protected? Can you expect certain procedures to be followed before that document is accessed? The Computer Crime & Intellectual Property Section of the US Dept of Justice site states: To determine whether an individual has a reasonable expectation of privacy in information stored in a computer, it helps to treat the computer like a closed container such as a briefcase or file cabinet. 
The Fourth Amendment generally prohibits law enforcement from accessing and viewing information stored in a computer if it would be prohibited from opening a closed container and examining its contents in the same situation…. Although courts have generally agreed that electronic storage devices can be analogized to closed containers, they have reached differing conclusions about whether a computer or other storage device should be classified as a single closed container or whether each individual file stored within a computer or storage device should be treated as a separate closed container.

But you might lose that Fourth Amendment right when you give control to a third party, such as a cloud provider. Imagine you wrote a play about terrorism and used a cloud service to store your document. Maybe there were some 'surveillance' keywords or triggers used as character lines. Maybe there is a scene at a transportation hub (train, airport, etc.) and the characters themselves say things that could be taken as domestic threats, out of context of course. You should have some expectation that your literary work is kept just as safe/secure while in the cloud as it is on your powered-down hard drive or stack of papers on your desk. And we haven't even touched on compliance, records retention, computer forensics, data recovery and many other litigation issues. The cases continue to play out, and this blog entry only covers a couple of the challenges associated with Cloud Computing and the Law, but CloudFucius will keep an eye on it for ya. 
Many of the articles found while researching this topic:

The Constitutional issues of cloud computing
In digital world, we trade privacy for convenience
Cloud Computing and the Constitution
INTERNET LAW - Search and Seizure of Home Computers in Virginia
Time to play catch-up on Internet laws: The gap between technology and America's laws hit home last week in a court decision on network neutrality
FCC considers reclassification of Internet in push to regulate it
Personal texting on a work phone? Beware your boss
High Court Justices Consider Privacy Issues in Text Messaging Case
Yahoo wins email battle with US Government
How Twitter's grant to the Library of Congress could be copyright-okay
Judge Orders Google To Deactivate User's Gmail Account
FBI Warrant Sought Google Apps Content in Spam Case
State court rules company shouldn't have read ex-staffer's private e-mails
District Took 56,000 Pictures From Laptops
Can the Cloud survive regulation?
Group challenging enhanced surveillance law faces uphill climb
Watchdogs join 'Net heavyweights in call for privacy law reform
Digital Due Process
Judge's judgment called into question
Dept of Justice Electronic Evidence and Search & Seizure Legal Resources
Electronic Evidence Case Digest
Electronic Evidence

Finally, you might be wondering why CloudFucius went from A to C in his series. Well, this time we decided to jump around but still cover 26 interesting topics. And one from Confucius himself: I am not one who was born in the possession of knowledge; I am one who is fond of antiquity, and earnest in seeking it there.

ps

The CloudFucius Series: Intro, 1

The Problem with Consumer Cloud Services...
…is that they're consumer #cloud services.

While we're all focused heavily on the challenges of managing BYOD in the enterprise, we should not overlook or understate the impact of consumer-grade services within the enterprise. Just as employees bring their own devices to the table, so too do they bring a smattering of consumer-grade "cloud" services to the enterprise. Such services are generally woefully inappropriate for enterprise use. They are focused on serving a single consumer, with authentication and authorization models that support that focus. There are no roles, generally no group membership, and there's certainly no oversight from some mediating authority other than the service provider. This is problematic for enterprises, as it eliminates the ability to manage access for large groups of people, to ensure authority to access based on employee role and status, and provides no means of integration with existing ID management systems.

Integrating consumer-oriented cloud services into enterprise workflows and systems is a Sisyphean task. Cloud services replicating what have traditionally been considered enterprise-class services such as CRM and ERP are designed with the need to integrate. Consumer-oriented services are designed with the notion of integration with other consumer-grade services, not enterprise systems. They lack even the most rudimentary enterprise-class concepts such as RBAC, group-based policy and managed access. SaaS supporting what are traditionally enterprise-class concerns such as CRM and e-mail have begun to enable the integration with the enterprise necessary to overcome what is, according to a survey conducted by CloudConnect and Everest Group, the number two inhibitor of cloud adoption amongst respondents. The lack of integration points into consumer-grade services is problematic for both IT and the service provider. 
For the enterprise, there is a need to integrate with, and control the processes associated with, consumer-grade cloud services. As with many SaaS solutions, the ability to collaborate with data-center hosted services as a means to integrate with existing identity and access control services is paramount to assuaging the concerns that currently exist, given the more lax approach to access and identity in consumer-grade services. Integration capabilities – APIs – that enable enterprises to integrate even rudimentary control over access are a must for consumer-grade SaaS looking to find a path into the enterprise. Not only is it a path to monetization (enterprise organizations are a far more consistent source of revenue than are ads or income derived from the sale of personal data) but it also provides the opportunity to overcome the stigma associated with consumer-grade services that has already resulted in "bans" on such offerings within large organizations. There are fundamentally three functions consumer-grade SaaS needs to offer to entice enterprise customers:

1. Control over AAA. Enterprises need the ability to control who accesses services and to correlate with authoritative sources of identity and role. That means the ability to coordinate a log-in process that primarily relies upon corporate IT systems to assert access rights, and the capability of the cloud service to accept that assertion as valid. APIs, SAML, and other identity management techniques are invaluable tools in enabling this integration. Alternatively, enterprise-grade management within the tools themselves can provide the level of control required by enterprises to ensure compliance with a variety of security and business-oriented requirements.

2. Monitoring. Organizations need visibility into what employees (or machines) may be storing "in the cloud" or what data is being exchanged with what system. This visibility is necessary for a variety of reasons, with regulatory compliance most often cited.

3. Mobile Device Management (MDM) and Security. Because one of the most alluring aspects of consumer cloud services is nearly ubiquitous access from any device and any location, the ability to integrate #1 and #2 via MDM and mobile-friendly security policies is paramount to enabling (willing) enterprise adoption of consumer cloud services.

While most of the "consumerization" of IT tends to focus on devices, "bring your own services" should also be a very real concern for IT. And if consumer cloud services providers think about it, they'll realize there's a very large market opportunity for them to support the needs of enterprise IT while maintaining their gratis offerings to consumers.

FedRAMP Federates Further
FedRAMP (Federal Risk and Authorization Management Program), the government’s cloud security assessment plan, announced late last week that Amazon Web Services (AWS) is the first agency-approved cloud service provider. The accreditation covers all AWS data centers in the United States. Amazon becomes the third vendor to meet the security requirements detailed by FedRAMP. FedRAMP is the result of the US Government’s work to address security concerns related to the growing practice of cloud computing; it establishes a standardized approach to security assessment, authorization and continuous monitoring for cloud services and products. By creating industry-wide security standards and focusing more on risk management, as opposed to strict compliance with reporting metrics, officials expect to improve data security as well as simplify the processes agencies use to purchase cloud services. FedRAMP is looking toward full operational capability later this year. As both the cloud and the government’s use of cloud services grew, officials found many inconsistencies in requirements and approaches as each agency began to adopt the cloud. Launched in 2012, FedRAMP aims to bring consistency to the process while giving cloud vendors a standard way of providing services to the government. And with the government’s cloud-first policy, which requires agencies to consider moving applications to the cloud as a first option for new IT projects, this should streamline the process of deploying to the cloud. This is an ‘approve once, and use many’ approach, reducing the cost and time required to conduct redundant, individual agency security assessments. AWS's certification is for three years. FedRAMP provides an overall checklist for handling risks associated with Web services that would have a limited or serious impact on government operations if disrupted.
Cloud providers must implement these security controls to be authorized to provide cloud services to federal agencies. The government will forbid federal agencies from using a cloud service provider unless the vendor can prove that a FedRAMP-accredited third-party organization has verified and validated the security controls. Once approved, the cloud vendor would not need to be ‘re-evaluated’ by every government entity that might be interested in their solution. There may be instances where additional controls are added by agencies to address specific needs. The BIG-IP Virtual Edition for AWS includes options for traffic management, global server load balancing, application firewall, web application acceleration, and other advanced application delivery functions.

ps

Related:
Cloud Security With FedRAMP
FedRAMP Ramps Up
FedRAMP achieves another cloud security milestone
Amazon wins key cloud security clearance from government
CLOUD SECURITY ACCREDITATION PROGRAM TAKES FLIGHT
FedRAMP comes fraught with challenges
F5 iApp template for NIST Special Publication 800-53
Now Playing on Amazon AWS - BIG-IP
Connecting Clouds as Easy as 1-2-3
F5 Gives Enterprises Superior Application Control with BIG-IP Solutions for Amazon Web Services

Inside Look - F5 Mobile App Manager
I meet with WW Security Architect Corey Marshall to get an Inside Look and detailed demo of F5's Mobile App Manager. BYOD 2.0: Moving Beyond MDM.

ps

Related:
F5's Feeling Alive with Newly Unveiled Mobile App Manager
BYOD 2.0 – Moving Beyond MDM with F5 Mobile App Manager
F5 MAM Overview
F5 BYOD 2.0 Solution Empowers Mobile Workers
Is BYO Already D?
Will BYOL Cripple BYOD?
Freedom vs. Control
F5's YouTube Channel
In 5 Minutes or Less Series (23 videos – over 2 hours of In 5 Fun)
Inside Look Series

TraceSecurity: DevOps Meets Compliance
#infosec #devops #bigdata #cloud IT security initiatives can benefit from a devops approach enabled with a flexible framework.

What do you get when you mix some devops with compliance and its resulting big data? Okay, aside from a migraine just thinking about that concept, what do you get? What TraceSecurity got was TraceCSO – its latest compliance, security, cloud mashup.

BIG (OPERATIONAL) DATA

The concept of big "operational" data shouldn't be new. Enterprise IT deals with enormous volumes of data in the form of logs generated by the hundreds of systems that make up IT. And the same problems that have long plagued APM (Application Performance Management) solutions are a scourge for security and compliance operations as well: disconnected systems produce disconnected data that can make compliance and troubleshooting even more difficult than it already is. Additional data that should be collected as part of compliance efforts – sign-offs, verification, etc. – often isn't or, if it is, is stored in a file somewhere on the storage network, completely disconnected from the rest of the compliance framework. Now add in the "big data" from regulations and standards that must be factored in. There are a daunting number of controls we have to manage. And we are all under multiple overlapping jurisdictions. There isn't a regulatory body out there that creates an authority document that doesn't, or hasn't, overlapped an already existing one. The new US HIPAA/HITECH Acts alone have spun a web of almost 60 Authority Documents that need to be followed. Even PCI refers to almost 5 dozen external Authority Documents and there are at least 20 European Data Protection Laws.
-- Information Security Form: Unified Compliance Framework (UCF)

While the UCF (Unified Compliance Framework) provides an excellent way to integrate and automate the matching of controls to compliance efforts and manage the myriad steps that must be completed to realize compliance, it still falls on IT to manage many of the manual processes that require sign-off or verification, or steps that simply cannot be automated. But there's still a process that can be followed, a methodology, that makes it a match for devops. The trick is finding a way to codify those processes in such a way as to make them repeatable and successful. That's part of what TraceCSO provides – a framework for process codification that factors in regulations, risk and operational data to ensure a smoother, simpler implementation.

HOW IT WORKS

TraceCSO is a SaaS solution, comprising a cloud-hosted application and a locally deployed vulnerability scanner providing the visibility and interconnectivity necessary to enable process automation. Much like BPM (Business Process Management) and IAM (Identity and Access Management) solutions, TraceCSO offers the ability to automate processes that may include manual sign-offs, integrating with local identity stores like Active Directory. The system uses wizards to guide the codification process, with many helpful links to referenced regulatory and compliance documents and associated controls. Initial system setup walks through adding users and departments, defining permissions and roles, coordinating network scanning and selecting the appropriate authority documents from which compliance levels can be determined. TraceCSO covers all functional areas necessary to manage an on-going, risk-based information security program: risk, policy, vulnerability, training, vendor, audit, compliance, process, and reporting. TraceCSO can be integrated with a variety of GRC solutions, though this may entail work on the part of TraceSecurity, the ISV, or the organization.
Integration with MDM, for example, is not offered out of the box; instead, compliance with proper security policies is approached via an audit process that requires sign-off by responsible parties designated in the system. Its integrated risk assessment measures against best-practice CIA (Confidentiality, Integrity, Availability) expectations. TraceCSO calculates a unique risk score based on CIA measures as well as compliance with authoritative documentation and selected controls, and allows not just a reported risk score over time but the ability to examine unimplemented controls and best practices against anticipated improvements in the risk score. This gives IT and the business a way to choose those control implementations that will offer the best "bang for the buck" and puts more weight behind risk-benefit analysis. By selecting regulations and standards applicable to the organization, TraceCSO can map controls identified during the risk assessment phase to its database of authorities. Technical controls can also be derived from vulnerability scans conducted by the TraceCSO appliance component. TraceCSO is ultimately an attempt to unify the many compliance and risk management functions provided by a variety of disconnected, individual GRC solutions today. By providing a single point of aggregation for risk and compliance management as well as process management, the system enables a more comprehensive view of both risk and compliance across all managed IT systems. It's a framework enabling a more devops approach to compliance, which is certainly an area often overlooked in discussions regarding devops methodologies despite the reality that its process-driven nature makes it a perfect fit. The same efficiencies gained through process and task automation in other areas of IT through devops can also be achieved in the realm of risk and compliance with the right framework in place to assist.
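One piece of such a framework – a CIA-weighted, control-aware risk score with the "what-if" comparison of unimplemented controls against the anticipated score improvement – might be sketched roughly as follows. The weighting and scale are illustrative assumptions, not TraceCSO's actual model.

```python
# Rough sketch of a CIA-weighted risk score. The formula and 0-100 scale
# are illustrative assumptions, not any vendor's actual scoring model.

def risk_score(cia, controls):
    """
    cia:      dict of confidentiality/integrity/availability exposure, each 0.0-1.0
    controls: list of (weight, implemented) tuples for the controls mapped
              to the selected authority documents
    Returns 0.0 (fully mitigated) to 100.0 (maximal risk).
    """
    exposure = (cia["confidentiality"] + cia["integrity"] + cia["availability"]) / 3
    total = sum(weight for weight, _ in controls)
    gap = 1 - (sum(w for w, done in controls if done) / total) if total else 1
    return round(100 * exposure * gap, 1)

# "What-if" analysis: anticipated improvement from implementing
# the one outstanding control (weight 2).
profile = {"confidentiality": 0.9, "integrity": 0.6, "availability": 0.3}
before = risk_score(profile, [(3, True), (2, False), (1, True)])
after = risk_score(profile, [(3, True), (2, True), (1, True)])
```

The before/after comparison is the part that "puts more weight behind risk-benefit analysis": the business can see the projected score drop for each candidate control before spending anything on it.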
TraceCSO looks to be one such framework.

A More Practical View of Cloud Brokers
#cloud The conventional view of cloud brokers misses the need to enforce policies and ensure compliance.

During a dinner at VMworld organized by Lilac Schoenbeck of BMC, we had the chance to chat up cloud and related issues with Kia Behnia, CTO at BMC. Discussion turned, naturally I think, to process. That could be because BMC is heavily invested in automating and orchestrating processes. Despite the nomenclature used (business process management), for IT this is a focus on operational process automation, though eventually IT will have to raise the bar and focus on the more businessy aspects of IT and operations. Alex Williams postulated the decreasing need for IT in an increasingly cloudy world. On the surface this generally seems to be an accurate observation. After all, when business users can provision applications a la SaaS to serve their needs, do you really need IT? Even in cases where you're deploying a fairly simple web site, the process has become so abstracted as to comprise the push of a button, dragging some components after specifying a template, and voila! Web site deployed, no IT necessary. While from a technical difficulty perspective this may be true (and if we say it is, it is for only the smallest of organizations), there are many responsibilities of IT that are simply overlooked and, as we all know, underappreciated for what they provide – not the least of which is being able to understand the technical implications of regulations and requirements like HIPAA, PCI-DSS, and SOX, all of which have some technical aspect to them and need to be enforced, well, with technology. See, choosing a cloud deployment environment is not just about "will this workload run in cloud X". It's far more complex than that, with many more variables that are often hidden from the end-user, a.k.a. the business peoples. Yes, cost is important. Yes, performance is important. And these are characteristics we may be able to gather with a cloud broker.
But what we can't know is whether or not a particular cloud will be able to enforce other policies – those handed down by governments around the globe and those put into writing by the organization itself. Imagine the horror of a CxO upon discovering an errant employee with a credit card has just violated a regulation that will result in Severe Financial Penalties or worse – jail. These are serious issues that conventional views of cloud brokers simply do not take into account. It's one thing to violate an organizational policy regarding e-mailing confidential data to your Gmail account; it's quite another to violate some of the government regulations that govern not only data at rest but in flight.

A PRACTICAL VIEW of CLOUD BROKERS

Thus, it seems a more practical view of cloud brokers is necessary; a view that enables such solutions to consider not only performance and price, but the ability to adhere to and enforce corporate and regulatory policies. Such a data center hosted cloud broker would be able to take these very important factors into consideration when making decisions regarding the optimal deployment environment for a given application. That may be a public cloud, it may be a private cloud – it may be a dynamic data center. The resulting decision (and options) are not nearly as important as the ability for IT to ensure that the technical aspects of policies are included in the decision making process. And it must be IT that codifies those requirements into a policy that can be leveraged by the broker and ultimately the end-user to help make deployment decisions. Business users, when faced with requirements for web application firewalls in PCI-DSS, for example, or ensuring a default "deny all" policy on firewalls and routers, are unlikely to be able to evaluate public cloud offerings for their ability to meet such requirements. That's the role of IT, and even wearing rainbow-colored cloud glasses can't eliminate the very real and important role IT has to play here.
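The decision logic of such a data center hosted broker can be sketched simply: filter candidate environments by the technical controls IT has codified, then rank the survivors by businessy criteria like cost. The environment names, costs, and control labels below are hypothetical, chosen only to illustrate the idea.

```python
# Illustrative sketch of a policy-aware broker decision. Candidates that
# cannot satisfy every required control are excluded before cost is even
# considered; all names and numbers are hypothetical.

CANDIDATES = [
    {"name": "public-cloud-a", "cost": 1.0, "controls": {"waf"}},
    {"name": "public-cloud-b", "cost": 1.4,
     "controls": {"waf", "default-deny", "encrypt-at-rest"}},
    {"name": "private-cloud", "cost": 2.1,
     "controls": {"waf", "default-deny", "encrypt-at-rest"}},
]

def place(required_controls):
    """Return the cheapest environment satisfying every required control,
    or None, meaning the workload stays in a controlled local environment."""
    eligible = [c for c in CANDIDATES if required_controls <= c["controls"]]
    return min(eligible, key=lambda c: c["cost"])["name"] if eligible else None
```

For example, a PCI-DSS workload requiring both a web application firewall and a default-deny posture would skip the cheapest cloud and land on the cheapest compliant one; a workload whose required controls no public offering provides falls through to None, the "keep it local" case the article closes on.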
The role of IT may be changing, transforming, but it is in no way being eliminated or decreasing in importance. In fact, given the nature of today's environments and threat landscape, the importance of IT in helping to determine deployment locations that at a minimum meet organizational and regulatory requirements is paramount to enabling business users to have more control over their own destiny, as it were. So while cloud brokers currently appear to be external services, often provided by SIs with a vested interest in cloud migration and the services they bring to the table, ultimately these beasts will become enterprise-deployed services capable of making policy-based decisions that include the technical details and requirements of application deployment along with the more businessy details such as costs. The role of IT will never really be eliminated. It will morph, it will transform, it will expand and contract over time. But business and operational regulations cannot be encapsulated into policies without IT. And for those applications that cannot be deployed into public environments without violating those policies, there needs to be a controlled, local environment into which they can be deployed.

Related blogs and articles:
The Social Cloud - now, with appetizers
The Challenges of Cloud: Infrastructure Diaspora
The Next IT Killer Is… Not SDN
The Cloud Integration Stack
Curing the Cloud Performance Arrhythmia
F5 Friday: Avoiding the Operational Debt of Cloud
The Half-Proxy Cloud Access Broker
The Dynamic Data Center: Cloud's Overlooked Little Brother

Lori MacVittie is a Senior Technical Marketing Manager, responsible for education and evangelism across F5’s entire product suite. Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
She is the author of XAML in a Nutshell and a co-author of The Cloud Security Rules.

FedRAMP Ramps Up
Tomorrow, June 6th, the Federal Risk and Authorization Management Program, the government’s cloud security assessment plan known as FedRAMP, will begin accepting security certification applications from companies that provide software services and data storage through the cloud. On Monday, GSA issued a solicitation for cloud providers, both commercial and government, to apply for FedRAMP certification. FedRAMP is the result of the government’s work to address security concerns related to the growing practice of cloud computing and establishes a standardized approach to security assessment, authorization and continuous monitoring for cloud services and products. By creating industry-wide security standards and focusing more on risk management, as opposed to strict compliance with reporting metrics, officials expect to improve data security as well as simplify the processes agencies use to purchase cloud services, according to Katie Lewin, director of the federal cloud computing program at the General Services Administration. As both the cloud and the government’s use of cloud services grew, officials found many inconsistencies in requirements and approaches as each agency began to adopt the cloud. FedRAMP’s goal is to bring consistency to the process but also give cloud vendors a standard way of providing services to the government. And with the government’s cloud-first policy, which requires agencies to consider moving applications to the cloud as a first option for new IT projects, this should streamline the process of deploying to the cloud. This is an ‘approve once, and use many’ approach, reducing the cost and time required to conduct redundant, individual agency security assessments. Recently, the GSA released a list of nine accredited third-party assessment organizations—or 3PAOs—that will do the initial assessments and test the controls of providers per FedRAMP requirements. The 3PAOs will have an ongoing part in ensuring providers meet requirements.
FedRAMP provides an overall checklist for handling risks associated with Web services that would have a limited or serious impact on government operations if disrupted. Cloud providers must implement these security controls to be authorized to provide cloud services to federal agencies. The government will forbid federal agencies from using a cloud service provider unless the vendor can prove that a FedRAMP-accredited third-party organization has verified and validated the security controls. Once approved, the cloud vendor would not need to be ‘re-evaluated’ by every government entity that might be interested in their solution. There may be instances where additional controls are added by agencies to address specific needs. Independent, third-party auditors are tasked with testing each product/solution for compliance, which is intended to save agencies from doing their own risk management assessment. Details of the auditing process are expected early next month but include a System Security Plan that clarifies how the requirements of each security control will be met within a cloud computing environment. Within the plan, each control must detail the solutions being deployed, such as devices, documents and processes; the responsibilities of the provider and the government customer to implement the plan; the timing of implementation; and how the solution satisfies the controls. A Security Assessment Plan details how each control implementation will be assessed and tested to ensure it meets the requirements, and the Security Assessment Report explains the issues, findings, and recommendations from the security control assessments detailed in the security assessment plan. Ultimately, each provider must establish means of preventing unauthorized users from hacking the cloud service. The regulations allow the contractor to determine which elements of the cloud must be backed up and how frequently. Three backups are required, one available online.
All government information stored on a provider's servers must be encrypted. When the data is in transit and encryption is not used, providers must use a "hardened or alarmed carrier protective distribution system," which detects intrusions. Since cloud services may span many geographic areas with various people in the mix, providers must develop measures to guard their operations against supply chain threats. Also, vendors must disclose all the services they outsource and obtain the board's approval to contract out services in the future. After receiving the initial applications, FedRAMP program officials will develop a queue order in which to review authorization packages. Officials will prioritize secure Infrastructure as a Service (IaaS) solutions, contract vehicles for commodity services, and shared services that align with the administration’s Cloud First policy. F5 has an iApp template for NIST Special Publication 800-53, which aims to make compliance with the standard easier for administrators of BIG-IPs. It does this by presenting, together in one place, a simplified list of configuration elements related to the security controls defined by the standard, making it easier for an administrator to configure a BIG-IP in a manner that complies with the organization's policies and procedures as defined by the standard. Note that the iApp addresses the configuration of BIG-IP's management capabilities, not the traffic passing through it; it does not itself take any action to make the applications serviced by a BIG-IP compliant with NIST Special Publication 800-53.
ps

Resources:
Cloud Security With FedRAMP
CLOUD SECURITY ACCREDITATION PROGRAM TAKES FLIGHT
FedRAMP comes fraught with challenges
FedRAMP about to hit the streets
FedRAMP takes applications for service providers
Contractors dealt blanket cloud security specs
FedRAMP includes 168 security controls
New FedRAMP standards first step to secure cloud computing
GSA to tighten oversight of conflict-of-interest rules for FedRAMP
What does finalized FedRAMP plan mean for industry?
GSA reopens cloud email RFQ
NIST, GSA setting up cloud validation process
FedRAMP Security Controls Unveiled
FedRAMP security requirements benchmark IT reform
FedRAMP baseline controls released
Federal officials launch FedRAMP