Security Sidebar: Improving Your SSL Labs Test Grade
Encrypt everything. That's what Google Chairman Eric Schmidt recently said. His comments were in response to various surveillance efforts that he considered government overreach and censorship. His rationale: if you are going to spy on everything I send across the Internet, then I'll simply encrypt it all so you can't read it. Other companies like Facebook, Twitter, and Yahoo have taken similar steps. In addition, Mark Nottingham (chairman of the group developing the new HTTP/2 protocol) said, "I believe the best way that we can meet the goal of increasing use of TLS on the Web is to encourage its use by only using HTTP/2 with https:// URIs." With all this encryption momentum from giants in the industry, the HTTPS path has been paved, and everyone who wants to stay relevant will have to get on board.

So, the world is moving to "encrypt everything" and you want to follow suit. Unfortunately, there are many different options to consider when implementing SSL on your web server. Wouldn't it be nice to just have a checkbox that said "click here for SSL implementation"? It's not that simple. Fortunately, there are many web-based tools that score the effectiveness of your web server's SSL implementation, and many of them provide recommendations on how to make it stronger and more efficient. Some of these include Wormly, SSL Shopper, DigiCert, and GlobalSign, to name a few. Some of these tools just give you basic certificate information, while others dig a little deeper into performance and known vulnerability status. There's no magic formula or mandate that forces any of these tools to look at one thing over another, so they all test things a little bit differently.

That said, the undisputed industry thought leader in this space is Qualys SSL Labs. Qualys does a great job of conducting a comprehensive inspection of the SSL implementation on your web server. Some may question the need for a good grade on the SSL Labs test, but imagine a customer checking, for example, their bank's website and finding a bad grade for its SSL implementation. If my bank had a failing grade, it would certainly get my attention, and it might make me think about moving my money and my business elsewhere. Even if an organization doesn't totally agree with the way Qualys approaches web server testing, it's still important to understand the testing methodology so as to align SSL implementation practices with their recommendations.

How does SSL Labs approach web server testing? They have a fairly short and easy-to-read SSL Server Rating Guide that outlines the exact methodology they use for testing. Their approach consists of 4 steps:

1. Look at the certificate to verify that it's valid and trusted
2. Inspect the server configuration in three categories: protocol support, key exchange support, and cipher support
3. Combine the category scores into an overall score (a score of zero in any category will push the overall score to zero), then calculate an overall letter grade
4. Apply a series of rules to handle aspects of server configuration that cannot be expressed via numerical scoring

The final letter grade is based on the following overall numerical score:

  >= 80  ->  A
  >= 65  ->  B
  >= 50  ->  C
  >= 35  ->  D
  >= 20  ->  E
  <  20  ->  F

Who knew you could get an "E" grade?!? I'm pretty sure I've received every other letter grade on that scale at some point in my life, but never an E.
By the looks of where it fits on the scale, I don't want to start now. One other note about the grading scale: in certain situations the standard A-F grades are not quite applicable and are out of scope. To handle this, SSL Labs has introduced the "M" grade (certificate name mismatch) and the "T" grade (site certificate is not trusted). So, when you are reviewing your score and you see the "M" or the "T", you don't have to wonder what happened with the scoring results. Anyway, let's quickly look at each of the 4 areas they test.

Certificate Inspection

Three certificate types are currently in use: domain-validated, organization-validated, and extended-validation (EV) certificates. SSL Labs only requires that a certificate be correct and does not go beyond that basic requirement. They do recommend EV certificates for higher-value web sites, but they have no way of knowing the purpose of each web site, so they simply check to make sure the site's certificate is valid and trusted. However, they do note some certificate issues that will immediately result in a zero score:

- Domain name mismatch
- Certificate not yet valid
- Certificate expired
- Use of a self-signed certificate
- Use of a certificate that is not trusted (unknown CA or some other validation error)
- Use of a revoked certificate
- Insecure certificate signature (MD2 or MD5)
- Insecure key

Server Configuration

The three criteria used for server configuration are protocol support (30% of the grade), key exchange (30% of the grade), and cipher strength (40% of the grade).

Protocol support is graded against the following criteria:

  SSL 2.0  ->  0%
  SSL 3.0  ->  80%
  TLS 1.0  ->  90%
  TLS 1.1  ->  95%
  TLS 1.2  ->  100%

They start with the score of the best protocol used on your web server, add the score of the worst protocol, and divide the total by 2. This doesn't account for any protocols in between the best and worst on your site, but that's why it's important to understand how they calculate all this stuff. For example, if your site supports SSL 3.0, TLS 1.1, and TLS 1.2, your score would be (100 + 80) / 2 = 90. How would you increase that score? Well, if you kept support for TLS 1.1 and TLS 1.2 and dropped support for SSL 3.0, your score would move up to (100 + 95) / 2 = 97.5.

Key exchange is graded against the following criteria:

  Weak key (Debian OpenSSL flaw)                          ->  0%
  Anonymous key exchange (no authentication)              ->  0%
  Key or DH parameter strength < 512 bits                 ->  20%
  Exportable key exchange (limited to 512 bits)           ->  40%
  Key or DH parameter strength < 1024 bits (e.g., 512)    ->  40%
  Key or DH parameter strength < 2048 bits (e.g., 1024)   ->  80%
  Key or DH parameter strength < 4096 bits (e.g., 2048)   ->  90%
  Key or DH parameter strength >= 4096 bits (e.g., 4096)  ->  100%

Cipher strength is the final piece of the server configuration equation. Servers can support varying strengths of ciphers, so SSL Labs scores cipher strength the same way they score protocol support: take the score of the strongest cipher, add the score of the weakest cipher, and divide by 2. The scores for each cipher are as follows:

  0 bits (no encryption)       ->  0%
  < 128 bits (e.g., 40, 56)    ->  20%
  < 256 bits (e.g., 128, 168)  ->  80%
  >= 256 bits (e.g., 256)      ->  100%

Sample Web Server

Let's say your web server has the following configuration:

- Valid and trusted certificate
- Protocol support for TLS 1.0 and TLS 1.1
- RSA key with 2048-bit strength
- Cipher algorithm is AES/CBC with 256-bit strength

In this case, you would score a 92.5 for protocol support, a 90 for key exchange, and a 100 for cipher strength. Protocol support accounts for 30% of the overall grade, key exchange accounts for another 30%, and cipher strength accounts for 40%. Using these values, you would score (92.5 * 30%) + (90 * 30%) + (100 * 40%) = 94.75. Converting this numerical score to a letter grade yields an overall "A". Congratulations!
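To make the arithmetic concrete, here's a minimal Python sketch of the simplified scoring described above. It only reproduces the category tables and the 30/30/40 weighting from this article; the real SSL Labs grader applies many more rules, caps, and automatic failures (covered in the next section), so treat it as an illustration rather than an implementation of their algorithm.

```python
# Rough sketch of the simplified SSL Labs scoring described above.
# This mirrors only the article's tables; the real grader applies
# many additional rules, caps, and automatic failures.

PROTOCOL_SCORES = {"SSL 2.0": 0, "SSL 3.0": 80, "TLS 1.0": 90, "TLS 1.1": 95, "TLS 1.2": 100}

def category_score(best, worst):
    """SSL Labs averages the best and worst supported value in a category."""
    return (best + worst) / 2

def key_exchange_score(bits):
    # Simplified: ignores weak/anonymous/export special cases from the table.
    if bits < 512:
        return 20
    if bits < 1024:
        return 40
    if bits < 2048:
        return 80
    if bits < 4096:
        return 90
    return 100

def cipher_score(bits):
    if bits == 0:
        return 0
    if bits < 128:
        return 20
    if bits < 256:
        return 80
    return 100

def overall(protocols, key_bits, cipher_bits_list):
    proto_vals = [PROTOCOL_SCORES[p] for p in protocols]
    protocol = category_score(max(proto_vals), min(proto_vals))
    key_exchange = key_exchange_score(key_bits)
    cipher_vals = [cipher_score(b) for b in cipher_bits_list]
    cipher = category_score(max(cipher_vals), min(cipher_vals))
    # Weighted combination: 30% protocol, 30% key exchange, 40% cipher strength.
    return 0.3 * protocol + 0.3 * key_exchange + 0.4 * cipher

def letter(score):
    for threshold, grade in [(80, "A"), (65, "B"), (50, "C"), (35, "D"), (20, "E")]:
        if score >= threshold:
            return grade
    return "F"

# The sample web server from the article: TLS 1.0/1.1, 2048-bit RSA, AES-256.
score = overall(["TLS 1.0", "TLS 1.1"], key_bits=2048, cipher_bits_list=[256])
print(score, letter(score))   # 94.75 A
```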
Important Things to Consider...

SSL Labs periodically changes their grading criteria and methodology based on changes in technology. Here are some changes that they have published (updated Feb 2018):

- SSL 2.0 is not allowed (results in an automatic "F")
- Insecure renegotiation is not allowed (results in an automatic "F")
- Vulnerability to the BEAST attack caps the grade at B
- Vulnerability to the CRIME attack caps the grade at C (previously capped at "B" but changed in the May 2015 test version)
- The test results no longer show the numerical score (0-100) because they realized that the letter grade (A-F) is more useful (they still calculate the numerical score...they just don't show it to you)
- No longer require server-side mitigation for the BEAST attack
- Support for TLS 1.2 is now required to get an A grade. Without it, the grade is capped at a B
- If vulnerable to the Heartbleed attack, automatic "F" grade
- If vulnerable to the OpenSSL CVE-2014-0224 vulnerability, automatic "F" grade
- Keys below 2048 bits (e.g., 1024) are now considered weak, and the grade is capped at a B
- Keys under 1024 bits are now considered insecure (results in an automatic "F")
- Warnings have been introduced as part of the rating criteria. In most cases, warnings are about issues that do not yet affect the grade, but likely will in the future. Server administrators are advised to correct the warnings as soon as possible. Some examples are:
  - Warning: RC4 is used with TLS 1.1 or newer protocols. Because RC4 is weak, the only reason to use it is to mitigate the BEAST attack. For some, BEAST is still a threat. Because TLS 1.1 and newer are not vulnerable to BEAST, there is no reason to use RC4 with them
  - Warning: No support for Forward Secrecy
  - Warning: Secure renegotiation is not supported
- Grade A- is introduced for servers with generally good configuration that have one or more warnings
- Grade A+ is introduced for servers with exceptional configurations. At the moment, this grade is awarded to servers with good configuration, no warnings, and HTTP Strict Transport Security support with a max-age of at least 6 months
- MD5 certificate signatures are now considered insecure (results in an automatic "F")
- Clarified that insecure certificate signatures affect the certificate score. This has always been the case for MD2
- Clarified that the strength of DHE and ECDHE parameters affects key exchange scoring. This has always been the case, but previous revisions of the text were not clear about it
- An A+ score is not awarded to servers that use SHA1 certificates
- Overall grade is capped at C if vulnerable to the POODLE attack
- An A+ score is not awarded to servers that don't support TLS_FALLBACK_SCSV
- Overall grade is capped at "B" if SSL 3 is supported
- Overall grade is capped at "B" if RC4 is supported
- Overall grade is capped at "B" if the certificate chain is incomplete
- Servers that have SSL 3.0 as their best protocol automatically get an "F"
- If using weak DH parameters (less than 1024 bits), the grade is automatically set to "F"
- If using weak DH parameters (less than 2048 bits), the grade is capped at "B"
- If using export cipher suites, the grade is automatically set to "F"
- If vulnerable to the CRIME attack, the best grade is capped at "C" (was "B" prior to the May 2015 test version)
- Cap the grade at "C" if RC4 is used with TLS 1.1+
- Cap the grade at "C" if TLS 1.2 is not supported
- Fail servers that support only RC4 suites
- Detect when RSA exponent 1 is used; this is insecure and gets an automatic "F"
- Hosts that have HPKP issues can't get an A+ grade
- Servers vulnerable to the DROWN attack get an automatic "F" grade
- If vulnerable to CVE-2016-2107 (padding oracle in the AES-NI CBC MAC check), the grade is an automatic "F"
- Introduce a penalty (grade capped at C) for using 3DES (and other ciphers with 64-bit block sizes) with TLS 1.1+
- SHA1 certificates are no longer trusted; this results in a "T" grade
- Introduce an explicit penalty for using cipher suites weaker than 112 bits. This was necessary to address a flaw in the SSL Labs grading algorithm that didn't sufficiently penalize these weak suites
- WoSign/StartCom certificates are distrusted and will result in a "T" grade
- If vulnerable to Ticketbleed (CVE-2016-9244), the grade is an automatic "F"

In addition to these updates, SSL Labs is planning to add more criteria changes in March 2018. These include:

- Penalty for not using forward secrecy (grade capped at "B"). Not using forward secrecy is currently a warning, but it will soon affect the actual grade of your web server. They will not penalize sites that use suites without forward secrecy, provided those suites are never negotiated with clients that can do better.
- Penalty for not using AEAD suites (grade capped at "B"). Your site should use secure cipher suites, and AEAD is the only encryption approach without any known weaknesses. Also, the new TLS 1.3 protocol supports only AEAD suites. In the new grading criteria, websites will be required to use AEAD suites to get an "A". However, as with forward secrecy, they will not penalize sites that continue to use non-AEAD suites, provided AEAD suites are negotiated with clients that support them.
- Penalty for the Return Of Bleichenbacher's Oracle Threat (ROBOT) vulnerability (automatic "F" grade). ROBOT is an attack model based on Daniel Bleichenbacher's chosen-ciphertext attack. Bleichenbacher discovered an adaptive chosen-ciphertext attack against protocols using RSA and demonstrated the ability to perform RSA private-key operations with it. Researchers have since been able to exploit the same vulnerability with small variations of the Bleichenbacher attack. The ROBOT vulnerability was a warning in the past, but it will now be used in the grading algorithm. Note: F5 has provided mitigation steps for the ROBOT vulnerability in article K21905460: BIG-IP SSL vulnerability (ROBOT) CVE-2017-6168.
- Penalty for using Symantec certificates (a grade of "T" will be given). Starting March 1, 2018, SSL Labs will give a "T" grade for Symantec certificates issued before June 2016.
Hopefully you can start to see how your overall grade can change based on different options and configurations. As SSL Labs changes their grading criteria and testing methodology (for example, will support for HTTP/2 be needed for an "A" grade in the future?), you should stay aware of what they are doing and how your web site is affected by their changes. It's important to check back periodically to see how your grade looks...your customers are certainly checking on you! After all, if you're gonna "encrypt everything," you might as well encrypt it correctly. Knowing all this, you can more easily configure your web server to go from a failing grade to a great one. Here's to great web site configurations, effective security, and A+ grades!

Security Sidebar: My Browser Has No Idea Your Certificate Was Just Revoked
Encryption is a fundamental reality on the Internet today. Most sites use SSL/TLS for encryption, and you can identify these sites by the https:// in the address bar of your browser. The Internet security service company Netcraft has been tracking SSL usage for over 20 years now, and their most recent data shows that there are now more than one thousand times more certificates on the web today than in 1996. DevCentral is no exception to this SSL phenomenon…go ahead, check your browser's address bar and notice the address for this article (or anything else on DevCentral for that matter) will start with https:// instead of plain old http://.

This SSL/TLS encryption provides a secure means of communication between your browser and the web server. In order to make all this encryption happen, encryption keys are shared between the web server and your browser. Encryption key exchange gets very complicated and this article is not meant to explain all the details of encryption key exchange mechanisms, but from a very high-level perspective, it's fair to say that these keys are shared by using the web server's SSL/TLS certificate. When a user visits a secure website, an encryption key exchange process takes place, and the resulting encryption keys are used to encrypt all communication between that user and the web server.

A certificate is a digital file that holds several pieces of information related to a particular website. One of the pieces of information it holds is the public portion of the encryption key used to encrypt all the communications to/from the web server. Another piece of information it holds is the effective dates of the certificate. After all, these things are only good for a finite period of time (typically 1-2 years). In a perfect world, a web server would be issued a certificate and that certificate would never get compromised and it would be used for the full duration of the life of the certificate. But we don't live in a perfect world. The reality is that certificates get compromised all the time, and when that happens, the certificate needs to be revoked. Typically when a web server certificate is revoked, a new certificate is created and used in place of the old, revoked certificate. But, how does a user know that a certificate has been revoked?

The Magic of CRL and OCSP

Here's how it works…when a user visits a secure website, the certificate is sent from the website to the user's browser (Chrome, Firefox, Internet Explorer, Safari, etc). Because certificate sharing creates significant computational overhead, many browsers simply store the certificate information from a previously-visited website in their cache so they don't have to keep asking for a new certificate each time they visit that website. This is nice because it significantly speeds up the user experience for loading that particular secure website, but it also presents a problem when the certificate is no longer valid.

In order to check that a given certificate is still valid, the concept of a Certificate Revocation List (CRL) was introduced. The CRL is a digital file created by a Certification Authority (the organization that creates and distributes certificates) that contains the serial number for each certificate that has been revoked by that CA. In order for a browser to check that a given certificate is still valid, the CRL must be downloaded and the serial number for the website you are visiting must be checked against the CRL to ensure the certificate is not revoked.
If the certificate is not revoked, all is good and the browser displays the page. But if the certificate has been revoked, the browser should display a warning page that tells you the certificate has been revoked. Some browsers will allow you to continue to the page anyway and others won't…it just depends on the browser.

The CRL check method is computationally expensive because the browser has to download the CRL every time it needs to check a certificate (which happens very frequently), and it also has to search through the CRL for a match of the serial number of the certificate it is using (CRL files can get extremely large). In order to avoid frequent CRL downloads and searches, some browsers will cache the CRL for a given period of time and simply check the cached CRL instead of downloading a new one each time. This helps speed things up, but what if the CRL changed since the last time it was cached in your browser? This situation became a big enough problem that a new, faster solution was introduced. The Online Certificate Status Protocol (OCSP) was developed in 1999, and it is a solution that queries an online database of serial numbers for revoked certificates. Rather than host CRL files, CAs can instead set up an OCSP server and include its URL in issued certificates. Clients that support OCSP can then query the database to see if a given certificate has been revoked, instead of downloading the entire CRL file. OCSP is a much more efficient solution than the CRL method.

Here's a quick view of the certificate issued to f5.com. Notice it was issued by the Entrust Certification Authority; therefore, Entrust is the one who manages the CRL file for all their revoked certificates, and they also manage the list of serial numbers used in the OCSP queries. The picture on the left is the detailed view of the f5.com certificate with the CRL location listed. This is the URL where the browser can go download the CRL from Entrust. Keep in mind that this CRL is just the one managed by Entrust. I took the liberty of visiting the URL where the CRL is located, and the details for the CRL are shown in the picture on the right. Notice that the CRL is simply a big list of serial numbers with revocation dates and reasons for revocation. The following screenshot is the OCSP portion of the f5.com certificate. Notice the URL listed in the details section at the bottom of the screenshot. This is the URL that the browser will visit in order to check the current certificate serial number against the database of revoked serial numbers managed by Entrust.
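If you'd rather not click through certificate dialogs to find these pointers, you can pull them straight out of a certificate yourself. Here's a minimal sketch that assumes Python with the third-party cryptography package installed and a PEM-encoded certificate saved locally (the cert.pem file name is just an example); it prints the CRL distribution point and the OCSP responder URL from the AIA extension.

```python
# Print where a certificate says its revocation information lives.
# Assumes: pip install cryptography, and a PEM certificate saved as cert.pem
from cryptography import x509
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())
print("Valid:  ", cert.not_valid_before, "->", cert.not_valid_after)

# CRL Distribution Points: where a browser could download the full CRL
try:
    crl_ext = cert.extensions.get_extension_for_oid(ExtensionOID.CRL_DISTRIBUTION_POINTS)
    for dp in crl_ext.value:
        for name in dp.full_name or []:
            print("CRL: ", name.value)
except x509.ExtensionNotFound:
    print("No CRL distribution point listed")

# Authority Information Access: where the OCSP responder lives
try:
    aia = cert.extensions.get_extension_for_oid(ExtensionOID.AUTHORITY_INFORMATION_ACCESS)
    for desc in aia.value:
        if desc.access_method == AuthorityInformationAccessOID.OCSP:
            print("OCSP:", desc.access_location.value)
except x509.ExtensionNotFound:
    print("No AIA extension listed")
```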
But, that’s a topic for another day. The OV certificate is more expensive than the DV, and it takes things a bit further with respect to the CA checking that the requesting organization actually owns the domain name. There are some other organizational vetting procedures that a CA might take in order to more fully understand that the requestor is the owner. In addition to what they would do for a DV certificate, maybe they’ll make a few reference phone calls and do some simple background checks. But, then again, it’s totally up to the individual CA to develop their own procedures for vetting an organization prior to issuing an OV certificate. The EV certificate is the most expensive, and it requires the most amount of background checking and validation from a CA before it is approved. In fact, the process for vetting an organization prior to issuing an EV certificate is governed by the CA/Browser Forum. This organization has developed a robust list of requirements that must be met before a CA can issue an EV certificate. Companies that want the EV certificate do so because they want to show the world that they are serious about the security of their online presence. The reality is that users visit these secure sites via an Internet Browser (Chrome, Firefox, Internet Explorer, etc), so it’s interesting to see how these different browsers handle certificate revocation and also how they treat the different kinds of certificates. The top 3 browsers on the Internet today are Google Chrome (70.4%), Mozilla Firefox (17.5%), and Microsoft Internet Explorer (5.8%). Almost 94% of all Internet traffic is displayed by one of these three browsers. Let’s take a quick minute to see how these check for certificate revocation. Google Chrome Google Chrome is by far the most popular browser today. When it comes to certificate revocation checking, Chrome blazes its own trail and does its own thing. The CRL and OCSP methods of certificate revocation checking are the industry standards, but Google isn’t standard in this space. It has created what’s known as a CRLSet to check for certificate status. A CRLSet is Google’s own list of revoked certificates that it compiles and updates when it crawls the CRLs from the major CAs around the world. Instead of checking an OCSP responder or a CRL, Google Chrome simply checks its own CRLSet for certificate status when visiting a secure website. Google claims that this is faster and safer for the user than the traditional CRL or OCSP methods. Some people think this approach is good because it’s faster to check a locally stored list than using the traditional methods and you also don’t have to worry about OCSP responder or CRL distribution point availability. But others are skeptical because the CRLSet is only comprised of certificates that Google deems worthy to include. What if the certificate you need to check isn’t on the CRLSet list? Also, the CRLSet file size is explicitly limited to 250KB. If something happens and lots of certificates are suddenly revoked causing the CRLSet to get bigger than 250KB (Heartbleed for example), then certificates are deleted from the CRLSet so that it stays at the 250KB max size. What if the certificate status you need gets deleted from the CRLSet during one of these bloat sessions? Mozilla Firefox Firefox allows you to check for revoked certificates via the OCSP method, but it doesn’t use the CRL at all. 
If a given certificate includes an OCSP address in the Authority Information Access (AIA) portion of the certificate, then Firefox will query the OCSP server to make sure the certificate is not revoked. If the OCSP server isn't available or if the OCSP address is not present in the AIA field of the certificate, then Firefox won't check revocation status and will present an error message (which the user can click through to proceed anyway). Check out the screenshot below for the settings in Firefox (version 46.0.1).

Microsoft Internet Explorer

Interestingly, Microsoft IE conducts the most comprehensive certificate revocation check of these leading browsers. The default setting is like Firefox…it checks the OCSP responder if the address is present in the AIA field of the certificate. But, if the OCSP server is not available or if the OCSP address is not present, it will then check the CRL (it checks the CRL loaded in cache if possible so that it doesn't have to continually download a large CRL file). If neither of these is available, it will present a warning page and give the user the option of either proceeding forward with an unknown certificate status or closing out of the browser. The following screenshot shows the settings for certificate revocation checks in IE.

You can see that each browser handles certificate revocation a little differently than the next. So, it's entirely possible that a revoked certificate could fall through the cracks if Google decided not to add it to their CRLSet, Firefox couldn't contact the OCSP server, and Internet Explorer had an outdated version of the CRL stored in cache. Be careful out there...

Security Sidebar: Did Quantum Computing Kill Encryption?
Google recently published results of its newest quantum computing capability with a chip called "Sycamore," and the results are pretty impressive. Classic computer operations rely on a binary 1 or 0 to execute operations, but quantum bits can effectively hold values of 1 and 0 at the same time, greatly increasing computing speed and power. Of course, this quantum computing thing is not easy. Giant companies like Google, IBM, and others have been working hard with large budgets for a long time to figure this thing out.

Google's Sycamore Chip

In its public release, Google showed that the Sycamore chip could execute calculations that are not possible with classical computers. The specific calculations that the Sycamore chip performed were related to complex random number generation. The Sycamore chip performed the calculations in about 200 seconds. In order to show the significance of how fast this was, the team also ran a simpler version of this same test on the world's fastest supercomputer (not a quantum computer) at the Oak Ridge National Laboratory. After the supercomputer completed this simpler task, the team was able to extrapolate the amount of time the supercomputer would have taken to complete the more complex task that Sycamore completed. The team suggested it would have taken the supercomputer about 10,000 years to complete the same task that Sycamore completed in 200 seconds!

Google's Quantum Computer

To be fair, the task of verifying complex random number generation doesn't necessarily have wide application in today's world. But, that was never really the point of this experiment. The point was to show the potential that quantum computing can have in our world as the technology matures. Some experts have compared this breakthrough to Sputnik in space or the Wright Brothers' first airplane flight...while these events arguably didn't have super-impressive results, they certainly paved the way for what would be very significant technology in the future. So, we will see where quantum computing takes us as an industry, but it's certainly proving that computing power is getting stronger and faster.

Encryption

So, how would this affect encryption? Encryption is fundamental to Internet privacy and security. At its core, encryption requires a secret key that the sender and receiver both have in order to encrypt and decrypt the information they send back and forth. Most encryption algorithms used today are widely known, and the developers show exactly how they work and how they were designed. While the security of the encryption is certainly based on its design and mathematical strength, it is also based on the fact that both the sender and receiver have a key that is kept secret. If an attacker steals the key, then game over. The strength of the key is based on the mathematical likelihood that someone (or something) could (or could not) figure it out.

If you have followed computer encryption for any length of time, you've no doubt noticed that certain encryption key strengths are no longer recommended. This doesn't automatically mean that the encryption algorithm is not good; it just means the key size needs to be larger so that a computer will take longer figuring out the key. As computer processing power has grown over the years, the need for larger key sizes has also grown. For example, the RSA encryption algorithm (used for server authentication and key exchange) has been tested over the years to see how long it would take a computer to crack the secret key. As you may know, RSA is built on the foundation of prime number factoring, where two large prime numbers are multiplied together to get a common value that is shared between the client and server. If a computer could take this large number and figure out the two prime numbers that were multiplied together, then it would know the secret key value. So, the whole foundation of security for RSA encryption is based on the idea that it is very difficult to figure out those two numbers that were multiplied together to get that big shared value. The idea with key size in RSA encryption is that the larger the two prime numbers are, the harder it is to figure them out.
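To see why factoring matters so much, here's a toy sketch with deliberately tiny primes (it needs Python 3.8+ for the modular-inverse form of pow). It is not real RSA (no padding, no secure prime generation), just an illustration that recovering the two prime factors of the public modulus is the same thing as recovering the private key.

```python
# Toy RSA with tiny numbers, purely to illustrate why factoring the
# modulus breaks the key. Real RSA uses primes hundreds of digits long,
# secure random generation, and padding schemes (none of that is here).

p, q = 61, 53                  # the two secret primes
n = p * q                      # 3233: the public modulus (the shared value)
e = 17                         # public exponent
phi = (p - 1) * (q - 1)        # only computable if you know p and q
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

message = 1234
ciphertext = pow(message, e, n)          # encrypt with the public key
assert pow(ciphertext, d, n) == message  # decrypt with the private key

# An attacker only sees n and e. Factoring n reveals everything:
def factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1

p2, q2 = factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(ciphertext, d2, n))  # 1234 -- the attacker recovered the message
```

With primes this small, the trial-division loop finishes instantly; the entire security argument for real RSA is that the same loop (or any known classical algorithm) takes impossibly long when the primes are hundreds of digits each.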
Many people have tested RSA over the years, and one group of researchers discussed some results from one of their tests. Several years ago, this team tested a 155-digit number and worked to factor it down. It took them nine years to figure out the factors (and thus the secret key). More recently, they tested a 200-digit number with more modern computing power, and it took them about 18 months to crack it. A while later (with still faster computers), they tried a 307-digit number, and they factored it down even faster. The point is, as modern computing power gets faster, the time it takes to crack an encryption key gets shorter.

A typical RSA implementation today uses a 1024-bit key size. Some applications will use 2048-bit key sizes, but the larger the key size, the more load it puts on the client and server, and the slower the web application becomes. So, there's a tension between strong (large) key size and application speed. Now that Google has shown the ability to use quantum computing to run calculations in 200 seconds that would take today's fastest supercomputers 10,000 years, it's not hard to imagine that an encryption key like the one used in RSA could be cracked in a matter of seconds. If you know a mathematician who designs computer encryption algorithms, tell them that the Internet might be looking for some new stuff pretty soon...

Security Sidebar: I Can See Your Browsing History
Is there any expectation of browsing privacy on the Internet any more? Well, there shouldn't be. A few years ago, Internet browsers were widely known to have vulnerabilities that allowed websites to search a user's browsing history. Websites could use a combination of JavaScript and Cascading Style Sheet (CSS) features to figure out what websites you visited. In 2010, researchers at the University of California at San Diego found that several pornographic sites would search a user's browser history to see if the user had visited other pornographic sites. But it wasn't just the porn industry viewing user habits. These same researchers found several news sites, finance sites, and sports sites doing the same thing.

Over time, browser security updates were supposed to have fixed these vulnerabilities...and they did for a while. But recently, security researchers have uncovered new vulnerabilities that allow this behavior once again. There's a new attack that uses the requestAnimationFrame function to determine how long it takes a browser to render a webpage. Simply stated, if the page renders quickly, the user has probably visited it before. You get the idea.

There are ways to work around these browser history vulnerabilities. The primary workaround is to make sure you never have any browser history. You can clear all your history when you close your browser (in fact, you can do this automatically on most browsers). While this might keep someone from knowing your browsing history, it can also prove to be very inconvenient. After all, if you clear your history...well, you lose your history. Let's be honest, it's nice to have your browser remember the sites you've visited. What a pain to reestablish your user identity on all the websites you like to hit, right?

So why is your browsing history so interesting? Many companies want to target you with ads and other marketing initiatives based on your browsing habits. They also want to sell your browsing habits to other interested parties. I could also talk about how the government might use this information to spy on (er, help) you, but I'll refrain for now. Allan Friedman, a research scientist at George Washington University, recently said that websites are very likely searching your browser history to determine the selling price for a particular item. They might offer you a better deal if they find that you've been shopping their competitors for the same item. Likewise, they might charge more if they find nothing related to said purchase in your browser history. Justin Brookman, a director at the Center for Democracy and Technology, echoed this sentiment when he said browsing history could come at a cost. For example, if you have been shopping on a high-end retail site, you will likely see advertisements for higher-priced businesses displayed on your browser.

Another way this could affect your daily life is in the area of smartphone geolocation. Your smartphone will broadcast location information every few seconds, and businesses can use this information to send marketing emails (coupons, daily deals, etc) when they know you are close by. Currently, there is no federal law that prohibits this behavior. As long as businesses aren't lying about what they are doing, it's perfectly legal. Don't be surprised when you conveniently get a "check out our great deals" email from the store you just passed by.
Ours is a really cool, technology-filled world...and it's kind of scary at the same time.

Security Sidebar: What's Real, and What's Fake?

Generative Adversarial Networks (GANs) are deep neural net architectures comprised of two networks, pitting one against the other (thus the "adversarial"). These networks can learn to mimic any distribution of data, and they can take input from many different sources in order to create things that are extremely similar to real-world things: images, music, speech, prose, and more.

Fake Pictures

The website This Person Does Not Exist uses GANs that study thousands of human faces and then generate faces of people who do not exist. Do you know the girl shown below? No, you don't. She doesn't exist. The generative network works alongside a discriminative network to determine how authentic the picture actually is. In effect, the generative network "generates" the picture (based on real-life images), and then the discriminative network provides feedback on whether the picture actually looks real or fake. Here's a cool picture of the process of how these GANs study real picture inputs and then generate fake pictures.
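For the technically curious, here's a heavily simplified sketch of that generator-versus-discriminator feedback loop. It assumes Python with the PyTorch package installed and uses random vectors in place of a real face dataset, so it won't produce faces; it only shows the adversarial mechanics: the discriminator learns to label real samples 1 and generated samples 0, while the generator learns to push its output toward the "real" label.

```python
# Minimal GAN training loop: a sketch of the adversarial feedback described above.
# Assumes PyTorch is installed; "real" data here is just random vectors standing in
# for real face images, so this only illustrates the mechanics.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)          # stand-in for a batch of real images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call its output "real"
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```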
On one hand, this is cool and fascinating stuff. On the other, it can get pretty freaky pretty fast. It also makes me think about the picture that my buddy showed me of his new "girlfriend"...I'm gonna need to actually meet the girl to confirm she's a real person.

Fake Videos

Related to all this, new advancements are coming in the area of artificial intelligence and fake videos. While video manipulation has been around for a relatively long time, researchers at Samsung have recently been able to take a single picture and turn it into a fake video of that person. We all know Miss Mona Lisa, right? Well, have you ever seen her have a conversation? No, because video wasn't around back then. Well, now you can...

When you add together the fake images from these GANs and the ability to turn a single picture into a video of that person, you get some crazy possibilities. Maybe the video evidence that has always been so trustworthy in a courtroom is suddenly not. Maybe your favorite politician gives a private speech on a controversial topic...or maybe they don't? The possibilities can get pretty extensive. In times like these, remember the fateful words of Abraham Lincoln (16th President of the United States): "Never believe everything you see on the Internet."

Security Sidebar: My Printer Did What?!?
Remember back in the good old days when a printer was just a printer? Well, that isn't reality any more. Printers have morphed from basic dot-matrix machines connected via parallel cable to fully-networked, multi-function devices on your network. Gone are the days of simply plugging in a USB cable to your work computer and printing on your own personal device. Now, it seems that everyone is using a complex, multi-function device that can print, scan, email, copy, fax, etc. And why not, right? If you can do all that stuff with one super-cool machine, there's no need to have tons of personal printers, scanners, and copiers all over the place. But as is the case with many things, the more functionality you introduce, the more vulnerabilities you expose.

Companies purchase these high-end, expensive devices for several reasons. Like I mentioned above, the expensive ones are fully networked and offer lots of features that would otherwise require several individual machines. And, the higher-end machines typically offer the best quality in printing, copying, and scanning. In fact, if a company is only interested in high-quality print, that company will likely be forced to purchase the fully networked device whether they need all the extra bells and whistles or not.

I recently watched a video presentation from a BSides event in Cleveland where Deral Heiland discussed different ways to hack these high-end printers. Deral did a great job, and I wanted to highlight one of the printer exploits he discussed...known as the LDAP Pass-Back-Attack. The first step in this LDAP Pass-Back-Attack takes advantage of the fact that most printers still have the default settings for the admin username and password. There are several ways to gain access to the password if it has been changed, but as Deral mentions in his discussion, most printers use the default password. Once you log in to the printer, you should be able to change the IP address and service port that the printer uses for its LDAP lookups, and many times you can change the authentication security so that the LDAP server will respond with passwords in plain text. Using an intercepting proxy like Burp Suite, you can capture and manipulate the LDAP lookup request data. When the LDAP server receives the manipulated lookup request, it will respond in plain text to the attacker's IP address on the newly-altered port number. The attacker can capture all the credentials using a tool like a Netcat listener. Check it all out in the diagram below.

Once you have the LDAP credentials, you can test them on a legitimate LDAP server in the target network. If the LDAP server happens to be a Domain Controller, the attacker might just have himself some domain admin rights! What's more, many of these multi-function printers actually store user passwords in their address books, so an attacker could use this same attack to gain access to several user accounts directly from the printer. Deral mentioned in his presentation that, in 2010, he gained access to Active Directory user accounts less than 10% of the time, and he rarely gained domain admin credentials using this attack. But, in 2014, he could gain access to Active Directory user accounts about 50% of the time, and this led to domain admin access almost 30% of the time. It seems we have stepped backward while stepping forward.
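If you want to test your own printers in a lab, the "listener" side of this is nothing exotic; netcat (nc -l) does the job, and below is a rough Python equivalent. The port number and the idea of pointing the printer's LDAP server setting at your test machine are assumptions about a lab setup, not steps taken from Deral's talk; the point is simply that once the printer is configured for plaintext/simple authentication, the bind credentials show up in whatever the printer sends to the address you gave it.

```python
# Minimal catch-all TCP listener for lab testing of LDAP pass-back behavior
# on printers you own. Point the printer's LDAP server setting at this
# host/port, trigger an address book lookup, and watch what arrives.
# The port here is an example; 389 typically requires elevated privileges.
import socket

HOST, PORT = "0.0.0.0", 3890

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(1)
    print(f"Listening on {HOST}:{PORT} ...")
    conn, addr = server.accept()
    with conn:
        print("Connection from", addr)
        while True:
            data = conn.recv(4096)
            if not data:
                break
            # With simple/plaintext bind enabled on the printer, the DN and
            # password are visible somewhere in this raw dump.
            print(data.hex(), "|", data.decode("latin-1", errors="replace"))
```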
In order to guard against this attack, it's recommended to turn off automatic firmware upgrades for your multi-function devices, isolate printers by department, keep printers from having Internet access, and, for crying out loud, change the default password!

Security Sidebar: Spear Phishing Still Happens…A Personal Story
I've been doing this security thing for many years now. In a list of current Internet patrons, I would include myself in the category of "those on high alert" for fraudulent and nefarious activity. I've seen many general phishing emails as well as targeted spear phishing emails, and I often wonder why these things are still so prevalent today. The answer, of course, is this: they still work! People still open those attachments and click on those links. If you are an attacker, why would you stop using a phishing attack vector that totally still works??

I was perusing my inbox the other day and found an interesting email that made me pause for a second. It was a PayPal receipt for an air compressor from Sears. I would have normally deleted this one right away, but this time was different. You see, my birthday was coming up soon, and my wife said she got me a present but, of course, wouldn't tell me what it was. When I got this email from PayPal, I thought I had inadvertently stumbled upon my birthday surprise. I still wondered if this receipt was legit because I never really expressed interest in an air compressor, but who knows, maybe she thought outside the box and wanted to get me this thing. So, I just left it there and figured I would act surprised when I opened an air compressor on my birthday.

Fast forward a couple of weeks…my birthday came and went, and guess what I got for my birthday?? Not an air compressor. Of course, this made me even more suspicious about the PayPal email. I went back and looked at it a little closer and found several tell-tale signs of a pretty good spear phishing email. As with any good spear phishing email, many aspects of it looked extremely legitimate, but some things were out of place. Here's the email sitting in my inbox:

It looked decent enough at first glance, and I didn't have a compelling reason to question this purchase given the situation I described above. However, after looking a little closer, I noticed several things wrong with this email. The first is that it was sent from a "PayPal" account, but the email address had nothing to do with PayPal. Instead it was "siguppyouracco@nominet.uk". Check out the screenshot below:

Next, I noticed that it was sent from a Sears store in Crawfordsville, Indiana based on the details of the shipping info. A quick Google Maps search of that address showed me this street view: It's a store that sells home improvement items, but it's definitely not Sears! At this point I'm seriously questioning the validity of this email. I noticed a few other things as well…the date stamp on the e-receipt is in YY/MM/DD format, and I doubt it would be that way coming from a store in Indiana. Also, I noticed that the dollar sign on the item price is shown after the number…it should have been shown in front of the number.

Finally, though, I noticed at the top of the email that it offered a chance to "request a cancellation" of my payment if I didn't recognize it. How wonderfully considerate of them!! I didn't click on the link, but I was curious to see where the link would have sent me. Who wants to bet that it wasn't a PayPal site for payment cancellation? I hovered over the link, and I noticed that it sends you to: www.redcross.gm/images/remember/html/us2.htm. Of course, the entire purpose of this spear phishing email was to get me to click on that link. I suspect the site found at that link has some not-so-nice malware that would have loaded on my machine automatically.
These spear phishers had no idea that it was my birthday and that my wife was getting me a surprise gift…they just got lucky that everything lined up and I gave this particular email a little more attention than I normally would have. Spear phishing is still a very viable form of malware distribution, so be careful before you open those attachments and click on those links!

Last thing…my actual birthday present was a surprise skydiving trip. I've always wanted to skydive, and now I can say I've done it! Here's a little video proof that I went flying through the air...it was WAY better than a portable air compressor from Sears.

Security Sidebar: A Point-Counterpoint Discussion On WAF Effectiveness
Web Application Firewalls (WAFs) are extremely popular today, and they provide critical protection for web applications. But some experts have recently postulated that WAFs are not really as effective as many people think they are. One recent article listed five ways that WAF protection fails. Let's do a point / counterpoint discussion with each of these five "WAF failures".

Point #1: WAFs fail because of negligent deployment, lack of skills and different risk mitigation priorities. Many companies simply don't have competent technical personnel to maintain and support WAF configuration on a daily basis.

Counterpoint #1: It's true that most companies don't have the technical expertise to maintain and support a WAF configuration. But now they don't have to. F5 offers the Silverline cloud-based platform that provides WAF protection for your web applications. Along with the WAF protection, Silverline also provides the technical expertise of our highly specialized F5 Security Operations Center (SOC) where teams of security professionals configure and maintain your WAF for you. See, now you don't really have to know about WAF configurations and support…the F5 team will do it for you!

Point #2: WAFs fail because they are deployed only for compliance purposes. Midsize and small companies frequently install WAFs just to satisfy a compliance requirement. They don't really care about practical security, and obviously won't care about maintaining their WAF.

Counterpoint #2: While it's true that some companies only deploy a WAF to satisfy certain compliance mandates (i.e. HIPAA, PCI-DSS), they can now use the WAF for the purposes it was designed for. After all, why go through the expense and effort of buying and deploying a WAF just to say you have one? Why not turn it on and use it to protect your web applications? Maybe at this point in the discussion you find yourself back at point #1…at which time I would focus your attention to counterpoint #1. Let F5 Silverline and F5 SOC do it for you!

Point #3: WAFs fail because of the complicated diversity of constantly evolving web applications. Today almost every company uses in-house or customized web applications, developed in different programming languages, frameworks and platforms. It's still common to see CGI scripts from the 90s in pair with complex AJAX web applications using third-party APIs and web services in the cloud.

Counterpoint #3: It's true that we live in a complex web application world. And, the crux of this "WAF failure" point is that things are just too complex and dynamic to keep up with. But fear not! F5 Silverline services gives you the expertise of our team of security professionals who understand the complexities of today's web application environment. Our team will build custom security policies that will protect your ever-changing web applications. Whether you deploy a cloud-based WAF service or you choose to keep it on premises (or both), you can rest assured that our team will provide the expertise needed to keep your applications secure.

Point #4: WAFs fail because business priorities dominate cybersecurity. It's almost unavoidable that your WAF will cause some false-positives by blocking legitimate website visitors.

Counterpoint #4: The fact that your WAF produces a false positive is certainly not reason enough to completely turn it off. Rather, you should fine tune and test the thing to stop producing false positives.
Of course, this gets back to point #1 where you don't have the technical expertise to stop these pesky false positives. And, of course, I focus your attention again on counterpoint #1 where the F5 SOC team can configure and fine tune all your security policies for you!

Point #5: WAFs fail because of their inability to protect against advanced web attacks. By design, a WAF cannot mitigate unknown application logic vulnerabilities, or vulnerabilities that require a thorough understanding of an application's business logic. Few innovators try to use incremental ruleset hardening paired with IP reputation, machine learning and behavioral white-listing to defend against such vulnerabilities.

Counterpoint #5: Advanced web attacks are certainly a serious threat for any company today. It's important to choose a WAF that is powerful and flexible enough to handle these advanced attacks. The F5 Application Security Manager (ASM) allows organizations to gain the flexibility they need to deploy WAF services close to apps to protect them wherever they reside—within a virtual software-defined data center (SDDC), managed cloud service environment, public cloud, or traditional data center. The ASM also utilizes the power of F5's IP Intelligence, where malicious users are blocked based on a reputation score that is computed from multiple sources across the globe. By identifying IP addresses and security categories associated with malicious activity, the IP Intelligence service can incorporate dynamic lists of threatening IP addresses into the ASM, adding context to policy decisions. The IP Intelligence service reduces risk and increases data center efficiency by eliminating the effort to process bad traffic. ASM users also benefit from an extensive database of attack signatures, dynamic attack signature updates, DAST integration, and the flexibility of F5 iRules scripting for customization and extensibility. The ASM also has whitelisting capabilities where known good IP addresses are always allowed access to your web applications.

WAFs remain a critical and strategic point of control for defending your web applications. But, as noted in the points above, WAFs must be deployed properly in order to achieve the full protection you require. If you find yourself in a position where you need a WAF (don't we all??) but you don't have the expertise or resources to configure and maintain the WAF properly, take a look at F5 Silverline…it might be just the solution you need!

Security Sidebar: Will The Real "Satoshi Nakamoto" Please Stand Up?

Some people love the anonymity that the Internet offers. You can lurk in the shadows using a random pseudonym, and if you are careful enough it's likely that no one will ever know who you are. Back in 2008, an inventor named "Satoshi Nakamoto" created a peer-to-peer electronic cash system known as Bitcoin. The system was first used in 2009, and it has steadily gained popularity ever since. Bitcoin is fascinating because it is a completely legitimate form of currency but is not backed by any central monetary authority. Conventional currency is issued by a central bank and backed by something…maybe gold or silver or something similar. Once upon a time, the United States dollar was backed by a huge cache of gold predominantly stored in a very secure facility at Fort Knox. Bitcoin is not backed by anything and not issued by any central bank. Instead, it is comprised of a peer-to-peer network made up of its users' computers. These computers use their power to solve complex mathematical problems (known as "mining"), and the complexity of these problems grows over time. Because the difficulty of these problems grows over time, the number of Bitcoins allowed in circulation is naturally limited. When a new Bitcoin is mined, all the computers in the network have to agree on the newly mined Bitcoin. So, the value of Bitcoin relies on the fact that all the users in the Bitcoin network are invested in this process and won't stand for any nefarious activity that would devalue the hard work they used to generate their own Bitcoins.
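As an aside for the curious, here's a toy sketch of the kind of "hard math problem" miners race to solve and how its difficulty can be tuned. Real Bitcoin mining uses double SHA-256 over a specific block header format and a network-wide difficulty target, so this is only the general idea, not the actual protocol.

```python
# Toy proof-of-work: find a nonce so that the hash of (block data + nonce)
# starts with N zero hex digits. Raising N makes the search exponentially
# harder, which is how a network can tune mining difficulty over time.
import hashlib
import time

def mine(block_data: str, difficulty: int):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

for difficulty in range(1, 6):
    start = time.time()
    nonce, digest = mine("previous-hash|transactions|", difficulty)
    print(f"difficulty {difficulty}: nonce={nonce:>8}  {digest[:16]}...  "
          f"{time.time() - start:.2f}s")
```

Verifying a solution is trivial (hash it once and check the prefix), which is why every other computer in the network can quickly agree that a newly mined block is legitimate even though finding it took real work.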
As of this article, one single Bitcoin is worth about $450.00. Here's a quick chart that shows the value of Bitcoins over time.

When Bitcoin was released, most everything about it was made completely public. The code, the protocol, the processes…everything. Everything except the identity of Satoshi Nakamoto. Of course, this mysterious identity has prompted people to want to know the man…or woman…or group of people…behind the genius system that is Bitcoin. And for seven years, no one has known. Back in 2014, a man really named Satoshi Nakamoto was famously accused of being the inventor of Bitcoin, but he adamantly denied this charge. Many other people have speculated as to the actual identity of Nakamoto, but no one really knows. Here's a little list of people I found who are thought to be the real Satoshi Nakamoto:

- Michael Clear, a graduate cryptography student at Dublin's Trinity College
- Neal King, Vladimir Oksman, and Charles Bry
- Martti Malmi, a developer living in Finland
- Jed McCaleb, a lover of Japanese culture and resident of Japan
- Donal O'Mahony and Michael Peirce, computer scientists
- Professor Shinichi Mochizuki, Japanese mathematician
- Dorian S Nakamoto, a Japanese man residing in California
- Hal Finney, developer
- Michael Weber, developer
- Wei Dai, developer
- Nick Szabo, technical writer

Among those included on the "who is the real Satoshi Nakamoto" list was an Australian entrepreneur named Craig S Wright. Wright had a fairly plausible resume when it came to associating his name with Nakamoto. He had many emails, transcripts, and other documents that defended the claim that his name should be on the Nakamoto list. How do we know he had all this stuff? Someone hacked into his business email account and found it all, of course! The problem is…Wright never said he was Nakamoto. In fact, ALL these people on the Nakamoto list have either outright denied that they are Nakamoto or have never come forward and claimed they are him. That is…until today.
Craig S. Wright contacted the BBC, the Economist, and GQ to identify himself as the real Satoshi Nakamoto. At a meeting in London, Wright met with these three media organizations and also invited several prominent Bitcoin developers and scientists. He used cryptographic proofs to show that he was, in fact, the father of Bitcoin. Now, he plans to move one of the "Satoshi Nakamoto" Bitcoins to further prove he is Nakamoto. The incredible transparency of Bitcoin transfers will make this move a pretty rock-solid proof that he really is who he says. He claims that he came forward because some of the people he cares about deeply were being falsely accused of many malicious things related to the Nakamoto identity. He finally came to the realization that he needed to prove his identity to save his friends from any more of these attacks. Let's admit it, we've been looking for Mr. Wright for a long time, and now we finally found him…or did we?

Security Sidebar: Regulating the Internet of Things
It seems that just about everything is Internet-connected today…cars, cameras, phones, lights, thermostats, refrigerators, toasters…just to name a few. The so-called "Internet of Things" (IoT) is huge. On one hand, this is an amazing step in the advancement of technology. On the other hand, it's a gold mine for exploitation if you're an attacker.

One of the most dangerous aspects of having all these devices connected to the Internet is that they can be used to attack something. A 2015 Gartner study estimated that 6.4 billion devices would be connected to the Internet in 2016 (still too early to have 2017 numbers), and we are on pace to have over 20 billion devices connected by 2020. Add to this the relative ease with which an attacker can take control of a given IoT device, and it paints a pretty scary picture.

Some would claim that an attacker taking control of their Internet-connected device is not inherently scary, and depending on which device you are referencing, those people would be right. Take, for instance, your new Internet-connected refrigerator. Let's say an attacker took control without you knowing about it. You probably couldn't care less as long as your food stayed cold. All you want is to make sure your milk is ready to go when you pour that amazing bowl of Frosted Flakes for breakfast the next morning (the milk at the end of a Frosted Flakes bowl of cereal is simply the best ever). The dangerous part, though, is that the computing power of your Internet-connected refrigerator (albeit small) could be used as part of a large-scale attack. As long as you aren't the target of said attack, I guess you don't completely care (or probably even realize it).

You might astutely note that, while there are 6+ billion Internet-connected devices in the world today, not all of them have been hacked, and even the ones that have been hacked are not all being used at the same time in an attack. You would be right. But even so, a small percentage could be hacked and used against a target…and a small percentage of 6 billion is still a huge number. We saw this exact situation with the Mirai botnet attack that took out several popular websites. The power of the Mirai botnet is built on compromised IoT devices. You don't want to be the next target of this botnet.

So with all this discussion about IoT devices, it brings up an interesting question: do we need to regulate all of this? After all, if these devices were forced to be built with more security, it would be much harder to hack into them and use them as part of an attack. On the side of "we do not need more regulation" stand many who would claim that regulation will simply add more frustration and bulk to an already-clunky manufacturing and distribution process. Manufacturers don't see the need to add more security to their devices because it typically doesn't make financial sense. And, how much more security is enough? If a company can make an Internet-connected toaster at a certain price today, how much more will it cost to produce when added security is required to be built in? This will likely push the price of toaster production past the point of profit for the company. And then the frustrated toaster company won't be able to make toasters any more. And then people won't have toast for breakfast. And then people will have to resort to eating regular bread. You see the trend. In addition, customers typically don't care about the security of their devices as much as they do the functionality of the device.
Who cares if my refrigerator is used in a massive botnet attack as long as it keeps my food cold, right? Said differently, I don't need encrypted milk…I need cold milk. However, there's the other side that says the government should step in and regulate all of this. I don't have to tell you that the threats (and execution) of DDoS attacks are growing at an alarming rate, and someone or something needs to step in and help. How can we, in good conscience, stand idly by and watch all this happen without trying to help in some way? Many would call it a moral obligation to do something about this. One wrinkle (of many), though, is that even if the United States passed legislation to regulate the security of "things" connected to the Internet, it still wouldn't guarantee anything for technologies that are developed or manufactured outside the United States. Is that a reason to do nothing, though?

So here we are. Do we add regulation to the IoT, thereby adding cost and possibly forcing companies out of business? Or do we let it all go, and accept the fact that we will see attacks grow in number and intensity?