Security Sidebar: Did Quantum Computing Kill Encryption?
Google recently published the results of its newest quantum computing capability, a chip called "Sycamore," and the results are pretty impressive. Classical computers rely on bits that are either a 1 or a 0 to execute operations. Quantum computers use qubits, which can exist in a superposition of 1 and 0 at the same time, greatly increasing computing speed and power. Of course, this quantum computing thing is not easy. Giant companies like Google, IBM, and others have been working hard, with large budgets, for a long time to figure it out.

Google's Sycamore Chip

In its public release, Google showed that the Sycamore chip could execute calculations that are not practical on classical computers. The specific calculations the Sycamore chip performed were related to complex random number generation. The Sycamore chip performed the calculations in about 200 seconds. To show how significant that is, the team also ran a simpler version of the same test on the world's fastest (non-quantum) supercomputer at the Oak Ridge National Laboratory. After the supercomputer completed the simpler task, the team extrapolated how long it would have taken to complete the more complex task that Sycamore finished. They suggested the supercomputer would have needed about 10,000 years to complete the same task that Sycamore completed in 200 seconds!

Google's Quantum Computer

To be fair, the task of verifying complex random number generation doesn't necessarily have wide application in today's world. But that was never really the point of this experiment. The point was to show the potential quantum computing can have as the technology matures. Some experts have compared this breakthrough to Sputnik or the Wright Brothers' first airplane flight... while those events arguably didn't have super-impressive immediate results, they certainly paved the way for very significant technology in the future. So, we will see where quantum computing takes us as an industry, but it certainly shows that computing power is getting stronger and faster.

Encryption

So, how would this affect encryption? Encryption is fundamental to Internet privacy and security. At its core, encryption requires a secret key that the sender and receiver both have in order to encrypt and decrypt the information they send back and forth. Most encryption algorithms used today are widely known, and the developers show exactly how they work and how they were designed. While the security of the encryption is certainly based on its design and mathematical strength, it also rests on the fact that both the sender and receiver keep their key secret. If an attacker steals the key, then it's game over. The strength of the key is based on the mathematical likelihood that someone (or something) could figure it out.

If you have followed computer encryption for any length of time, you've no doubt noticed that certain encryption key strengths are no longer recommended. This doesn't automatically mean the encryption algorithm is bad; it just means the key size needs to be larger so that a computer takes longer to figure out the key. As computer processing power has grown over the years, the need for larger key sizes has grown with it. For example, the RSA encryption algorithm (used for server authentication and key exchange) has been tested over the years to see how long it would take a computer to crack the secret key.
As you may know, RSA is built on the foundation of prime number factoring: two large prime numbers are multiplied together to produce a large modulus that is shared between the client and server. If a computer could take this large number and figure out the two prime numbers that were multiplied together, then it could derive the secret key. So, the whole foundation of security for RSA encryption is based on the idea that it is very difficult to figure out the two numbers that were multiplied together to get that big shared value. The idea with key size in RSA is that the larger the two prime numbers are, the harder they are to figure out.

Many people have tested RSA over the years, and one group of researchers discussed results from their tests. Several years ago, this team took a 155-digit number and worked to factor it down. It took them nine years to figure out the factors (and thus the secret key). More recently, they tested a 200-digit number with more modern computing power, and it took them about 18 months to crack it. A while later (with still faster computers), they tried a 307-digit number and factored it even faster. The point is, as modern computing power gets faster, the time it takes to crack an encryption key gets shorter.

A typical RSA implementation today uses a 1024-bit key size. Some applications will use 2048-bit key sizes, but the larger the key, the more load it puts on the client and server, and the more it slows the web application down. So there's a tension between a strong (large) key size and application speed. Now that Google has shown the ability to use quantum computing to run calculations in 200 seconds that would take today's fastest supercomputers 10,000 years, it's not hard to imagine that an encryption key like the one used in RSA could be cracked in a matter of seconds. If you know a mathematician who designs computer encryption algorithms, tell them the Internet might be looking for some new stuff pretty soon...
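To make the factoring idea above a little more concrete, here is a minimal Python sketch with deliberately tiny, made-up numbers (this is not how the researchers did it, and real RSA moduli are hundreds of digits long): once the public modulus is factored, the private key falls right out.

```python
# A minimal sketch of why RSA security rests on factoring: with a toy-sized
# modulus, trial division recovers the primes -- and therefore the private
# exponent -- almost instantly. Real 2048-bit moduli make this search
# infeasible for classical computers. All values here are illustrative.
import math

def factor(n):
    """Trial division: find the prime factors of n (only practical for tiny n)."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    return None

# Toy RSA parameters (far too small to be secure)
p, q = 61, 53
n = p * q              # public modulus (3233)
e = 17                 # public exponent

# An attacker who can factor n can rebuild the private exponent d
recovered_p, recovered_q = factor(n)
phi = (recovered_p - 1) * (recovered_q - 1)
d = pow(e, -1, phi)    # modular inverse of e (requires Python 3.8+)

print(f"n = {n} factors as {recovered_p} x {recovered_q}; private exponent d = {d}")
```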
Security Sidebar: What's Real, and What's Fake?

Generative Adversarial Networks (GANs) are deep neural net architectures comprised of two networks pitted against each other (thus the "adversarial"). These networks can learn to mimic any distribution of data, and they can take input from many different sources in order to create things that are extremely similar to real-world things: images, music, speech, prose, etc.

Fake Pictures

The website This Person Does Not Exist uses GANs that study thousands of human faces and then generate faces of people who do not exist. Pull up the site and look at the face it shows you. Do you know that person? No, you don't. They don't exist. The generative network works alongside a discriminative network to determine how authentic the picture actually is. In effect, the generative network "generates" the picture (based on real-life images) and then the discriminative network provides feedback on whether the picture actually looks real or fake. On one hand, this is cool and fascinating stuff. On the other, it can get pretty freaky pretty fast. It also makes me think about the picture my buddy showed me of his new "girlfriend"... I'm gonna need to actually meet the girl to confirm she's a real person.

Fake Videos

Related to all this, new advancements are coming in the area of artificial intelligence and fake videos. While video manipulation has been around for a relatively long time, researchers at Samsung have recently been able to take a single picture and turn it into a fake video of that person. We all know Miss Mona Lisa, right? Well, have you ever seen her have a conversation? No, because video wasn't around back then. Well, now you can... When you add together the fake images from these GANs and the ability to turn a single picture into a video of that person, you get some crazy possibilities. Maybe the video evidence that has always been so trustworthy in a courtroom is suddenly not. Maybe your favorite politician gives a private speech on a controversial topic... or maybe they don't? The possibilities can get pretty extensive. In times like these, remember the fateful words of Abraham Lincoln (16th President of the United States): "Never believe everything you see on the Internet."
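For readers curious how that generator-versus-discriminator feedback loop looks in code, here is a heavily simplified, hypothetical sketch (assuming PyTorch is available; the dimensions, layer sizes, and training data are placeholders, nothing like the production models behind face-generation sites):

```python
# A heavily simplified GAN training loop, for illustration only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64   # illustrative sizes, not from any real model

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, data_dim)     # stand-in for a batch of real images

for step in range(100):
    # 1) The discriminator learns to tell real samples from generated ones
    noise = torch.randn(32, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_batch), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) The generator learns to fool the discriminator (the "adversarial" feedback)
    noise = torch.randn(32, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```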
The "Dark Web" (sometimes called the "Dark Net") is a collection of thousands of websites that use anonymity tools (like Tor or I2P) to hide their IP address and physical location. These websites are notorious for conducting illegal activity like drug trade, money laundering, prostitution, etc. This Dark Web is fascinating because it seemingly allows all this illegal activity to happen in plain sight. A user who loads one of the anonymity tools and knows the site's URL can easily visit one of these illegal online marketplaces. Take Tor, for instance (it's the most commonly used anonymity software). Tor will encrypt web traffic in layers and route it through randomly-chosen computers around the world. Each computer removes one of the encryption layers before passing it to the next hop point. Because of this, it's extremely difficult (and many times impossible) to match the traffic's origin with its destination. This provides a safe haven for illegal activity to take place in plain sight. Imagine being a law enforcement official who watches all this illegal activity take place right in front of your face every day. You know you can arrest someone for it, but who? You can never trace the activity back to a known location/person. It's easy to understand that law enforcement officials around the world are interested in taking down some of the sites on this Dark Web network...sites like Silk Road 2, Cloud 9, Cannabis Road, and Cash Machine to name just a few. Of course, the problem has always been knowing who and where to strike. "We can now show that they are neither invisible nor untouchable" Welcome to "Operation Onymous." Europol's Cybercrime Centre, the Federal Bureau of Investigations, the US Immigrations and Customs Enforcement, and the Department of Homeland Security announced earlier this month that they had formed a Joint Cybercrime Action Team and spent six months preparing to take down many of these illegal sites on Tor. Troels Oerting (head of the Cybercrime Centre) said “we have demonstrated that we are able to efficiently remove vital criminal infrastructures that are supporting serious organised crime. And we are not 'just' removing these services from the open Internet; this time we have also hit services on the Darknet using Tor where, for a long time, criminals have considered themselves beyond reach. We can now show that they are neither invisible nor untouchable." Many reports disagree on the actual number of sites that were taken down, but even the lowest estimates leave us doubting whether or not the feds were able to crack the secure foundation of the Tor network. If only one or two sites had been compromised, you could reasonably believe poor OPSEC contributed to the problem. But, when approximately 50 sites were taken down, it makes you wonder if the entire foundation of Tor anonymity was compromised. When the feds were finished with their operation, 17 people were arrested including Blake Benthall who is said to have managed and administered the online drug marketplace Silk Road 2.0. The Silk Road site once looked like this: ...but now looks like this: As you can imagine, Tor is none too pleased. A recent post relayed this message: "Tor is most interested in understanding how these services were located and if this indicates a security weakness in Tor hidden services that could be exploited by criminals or secret police repressing dissents." 
I guess you always run the risk of law enforcement involvement when you provide an anonymous service and knowingly allow illegal activity to be so pervasive. Tor is hoping that, when these suspects face trial, the police will have to explain how they broke in. The police offered a different sentiment when they said, "this is something we want to keep for ourselves. The way we do this, we can't share with the whole world, because we want to do it again and again and again." Despite this global crackdown, many Tor (and Silk Road) users remain cautiously optimistic (Silk Road has been taken down before, by the way). One user said, "I predict that we will bounce back, stronger than before, but at this point I'm pretty freaked out." I guess no matter how you slice it or how you go about accomplishing it, crime doesn't pay.
Security Sidebar: Regulating the Internet of Things

It seems that just about everything is Internet-connected today… cars, cameras, phones, lights, thermostats, refrigerators, toasters… just to name a few. The so-called "Internet of Things" (IoT) is huge. On one hand, this is an amazing step in the advancement of technology. On the other hand, it's a gold mine for exploitation if you're an attacker. One of the most dangerous aspects of having all these devices connected to the Internet is that they can be used to attack something. A 2015 Gartner study estimated that 6.4 billion devices would be connected to the Internet in 2016 (still too early to have 2017 numbers), and we are on pace to have over 20 billion devices connected by 2020. Add to this the relative ease with which an attacker can take control of a given IoT device, and it paints a pretty scary picture.

Some would claim that an attacker taking control of their Internet-connected device is not inherently scary, and depending on which device you are referencing, those people would be right. Take, for instance, your new Internet-connected refrigerator. Let's say an attacker took control without you knowing about it. You probably couldn't care less as long as your food stayed cold. All you want is to make sure your milk is ready to go when you pour that amazing bowl of Frosted Flakes for breakfast the next morning (the milk at the end of a Frosted Flakes bowl of cereal is simply the best ever). The dangerous part, though, is that the computing power of your Internet-connected refrigerator (albeit small) could be used as part of a large-scale attack. As long as you aren't the target of said attack, I guess you don't completely care (or probably even realize it). You might astutely note that, while there are 6+ billion Internet-connected devices in the world today, not all of them have been hacked, and even the ones that have been hacked are not all being used at the same time in an attack. You would be right. But even so, a small percentage could be hacked and used against a target… and a small percentage of 6 billion is still a huge number. We saw this exact situation with the Mirai botnet attack that took out several popular websites. The power of the Mirai botnet is built on compromised IoT devices. You don't want to be the next target of this botnet.

So with all this discussion about IoT devices, it brings up an interesting question: do we need to regulate all of this? After all, if these devices were forced to be built with more security, it would be much harder to hack into them and use them as part of an attack. On the side of "we do not need more regulation" stand many who would claim that regulation will simply add more frustration and bulk to an already-clunky manufacturing and distribution process. Manufacturers don't see the need to add more security to their devices because it typically doesn't make financial sense. And how much more security is enough? If a company can make an Internet-connected toaster at a certain price today, how much more will it cost to produce when added security is required to be built in? This will likely push the price of toaster production past the point of profit for the company. And then the frustrated toaster company won't be able to make toasters anymore. And then people won't have toast for breakfast. And then people will have to resort to eating regular bread. You see the trend. In addition, customers typically don't care about the security of their devices as much as they do the functionality of the device.
Who cares if my refrigerator is used in a massive botnet attack as long as it keeps my food cold, right? Said differently, I don't need encrypted milk… I need cold milk. However, the other side says the government should step in and regulate all of this. I don't have to tell you that the threats (and execution) of DDoS attacks are growing at an alarming rate, and someone (or something) needs to step in and help. How can we, in good conscience, stand idly by and watch all this happen without trying to help in some way? Many would call it a moral obligation to do something about this. One wrinkle (of many), though, is that even if the United States passed legislation to regulate the security of "things" connected to the Internet, it still wouldn't guarantee anything for technologies that are developed or manufactured outside the United States. Is that a reason to do nothing, though?

So here we are. Do we add regulation to the IoT, thereby adding cost and possibly forcing companies out of business? Or do we let it all go, and accept the fact that we will see attacks grow in number and intensity?
Security Sidebar: Your Device Just Attacked Someone…And You Didn't Even Know It

Large-scale Distributed Denial of Service (DDoS) attacks are no joke, and attackers can use them to inflict substantial damage on just about any target they want. Depending on the target, the result of a DDoS attack could run the gamut from a simple pain in the neck because your website just went down to a significant financial loss for major corporations and customers. Regardless, these attacks are serious business, and the bad news is that they are becoming both easier to launch and more devastating to endure.

Large-scale DDoS attacks utilize a network of unsuspecting devices to inflict pain on their target. This network of unsuspecting devices is known as a botnet. In order to build a botnet, an attacker needs to gain access to many different Internet-connected devices. Back in the day, that was a tough job because there weren't very many Internet-connected devices to go around. But today? No problem. Just think about all the devices that are Internet-connected. You probably own at least five of them yourself (smartphone, smart TV, computer, tablet, etc.). With the proliferation of these Internet-connected devices, building a botnet is easier now than ever before.

Botnets are created by scanning the Internet for vulnerable devices and installing malware that will later be used to help launch an attack. The scanning typically happens one of two ways. The first is to port scan for specific servers and attempt to gain access by brute-force guessing the username and password of the device. The second uses external scanners to find new bots and, in some cases, botnet servers that already control a multitude of bots. If you can gain control of a botnet server, then you gain control of all the bots it controls. Alternatively, if you don't want to go through the hassle of building your own botnet, you can always rent one from one of many DDoS-for-hire providers who will DDoS a target for you. Either way, it's a powerful weapon.

So, when these botnets are created or expanded, which vulnerable devices should they look for? It doesn't really matter what the device is, as long as it has the capability to help launch the attack. That said, you have to wonder how many vulnerable devices are out there to be used in one of these botnets. No one knows the exact number (and it depends on the vulnerability being exploited as to which devices are vulnerable), but suffice it to say, the explosion of Internet-connected devices has made it extremely easy to find millions of them. The truth is, attackers don't need a desktop or laptop computer to launch an attack anymore. Now, they can go after devices like your home router, DVR, or IP camera. How many times do you change the default username/password on your home router? Or your IP camera? Or what about another device that gets shipped from the manufacturer with preloaded credentials that you don't even have the ability to change? You can see how easy it is to find vulnerable devices.

Security researcher and advocate Brian Krebs knows all too well about attacks from botnets. Last month, his site KrebsOnSecurity.com was hit by a DDoS attack that hurled over 620 Gbps of traffic at his site. The site was taken down for the better part of a week. He had a DDoS protection provider in place, but when 620 Gbps of traffic is aimed at one target, it's extremely difficult for a DDoS protection provider to keep up.
In the end, the provider said they couldn't handle it, and they told him he had to find another provider to protect his site. This attack was almost double the size of the largest attack they had ever seen… and they are a big, capable DDoS protection provider. Krebs has since turned to Google and its new Project Shield program for protection. As for the attack, Krebs said "the huge assault this week on my site appears to have been launched almost exclusively by a very large botnet of hacked devices."

Brian Krebs is certainly not the only target of a massive DDoS attack. I could spend hours listing known DDoS attacks and still not cover them all. These things are real, and they are serious. To add insult to injury, many experts believe that people are actively researching ways to use these massive botnets to take down the Internet itself. Once upon a time, only well-funded nation states had the resources to launch massive attacks against a given enemy. That's no longer the case. Certainly, a well-funded nation state could launch a devastating attack against a target… but so could the lowly owner of a massive botnet. Could someone literally take down the Internet? And will your unsuspecting device help do it?
Security Sidebar: Improving Your SSL Labs Test Grade

Encrypt everything. That's what Google Chairman Eric Schmidt recently said. His comments were in response to various surveillance efforts that he considered government overreach and censorship. His rationale: if you are going to spy on everything I send across the Internet, then I'll simply encrypt it all so you can't read it. Other companies like Facebook, Twitter, Yahoo, and many others have taken similar steps. In addition, Mark Nottingham (chairman of the group developing the new HTTP/2 protocol) said, "I believe the best way that we can meet the goal of increasing use of TLS on the Web is to encourage its use by only using HTTP/2 with https:// URIs." With all this encryption momentum from giants in the industry, the HTTPS path has been paved and everyone who wants to stay relevant will have to get on board.

So, the world is moving to "encrypt everything" and you want to follow suit. Unfortunately, there are many different options to consider when implementing SSL on your web server. Wouldn't it be nice to just have a checkbox that said "click here for SSL implementation"? It's not that simple. Fortunately, there are many different web-based tools that allow you to score the effectiveness of your web server's SSL implementation. Many of these tools provide recommendations on how to improve your web server's security and make it stronger and more efficient. Some of these include Wormly, SSL Shopper, DigiCert, and GlobalSign, to name a few. Some of these tools just give you basic certificate information while others dig a little deeper into performance and known vulnerability status. There's no magic formula or mandate that forces any of these tools to look at one thing over another, so they all test things a little bit differently. That said, the undisputed industry thought leader in this space is Qualys SSL Labs. Qualys does a great job of conducting a comprehensive inspection of the SSL implementation on your web server.

Some may question the need for having a good grade on the SSL Labs test, but imagine a customer checking, for example, their bank website and finding a bad grade for SSL implementation. If my bank had a failing grade on SSL implementation, it would certainly get my attention, and it might make me think twice about moving my money and my business elsewhere. Even though an organization may not totally agree with the way Qualys approaches web server testing, it's still important to understand their testing methodology so as to align SSL implementation practices with their recommendations.

How does SSL Labs approach web server testing? They have a fairly short and easy-to-read SSL Server Rating Guide that outlines the exact methodology they use for testing. Their approach consists of 4 steps:

1. Look at the certificate to verify that it's valid and trusted
2. Inspect the server configuration in three categories: protocol support, key exchange support, and cipher support
3. Combine the category scores into an overall score (a score of zero in any category will push the overall score to zero), then calculate an overall letter grade
4. Apply a series of rules to handle aspects of server configuration that cannot be expressed via numerical scoring

The final letter grade is based on the following overall numerical score:

Numerical Score : Letter Grade
>= 80 : A
>= 65 : B
>= 50 : C
>= 35 : D
>= 20 : E
< 20 : F

Who knew you could get an "E" grade?!? I'm pretty sure I've received every other letter grade on that scale at some point in my life, but never an E.
By the looks of where it fits on the scale, I don't want to start now. One other note about the grading scale: in certain situations the standard A-F grades are not quite applicable and are out of scope. To handle this, SSL Labs has introduced the "M" grade (certificate name mismatch) and the "T" grade (site certificate is not trusted). So, when you are reviewing your score and you see the "M" or the "T," you don't have to wonder what happened with the scoring results. Anyway, let's quickly look at each of the 4 areas they test.

Certificate Inspection

Three certificate types are currently in use: domain-validated, organization-validated, and extended-validation (EV) certificates. SSL Labs only requires that a certificate be correct and does not go beyond that basic requirement. They do recommend EV certificates for higher-value web sites, but they have no way of knowing the purpose of each web site, so they simply check to make sure the site's certificate is valid and trusted. However, they do note some certificate issues that will immediately result in a zero score:

- Domain name mismatch
- Certificate not yet valid
- Certificate expired
- Use of a self-signed certificate
- Use of a certificate that is not trusted (unknown CA or some other validation error)
- Use of a revoked certificate
- Insecure certificate signature (MD2 or MD5)
- Insecure key

Server Configuration

The three criteria used for server configuration are protocol support (30% of the grade), key exchange (30% of the grade), and cipher strength (40% of the grade).

Protocol support is graded against the following criteria:

Protocol : Score
SSL 2.0 : 0%
SSL 3.0 : 80%
TLS 1.0 : 90%
TLS 1.1 : 95%
TLS 1.2 : 100%

They start with the score of the best protocol used on your web server, add the score of the worst protocol, and then divide the total by 2. This doesn't account for any protocols in between the best and worst on your site, but that's why it's important to understand how they calculate all this stuff. For example, if your site supports SSL 3.0, TLS 1.1, and TLS 1.2, your score would be (100 + 80) / 2 = 90. How would you increase that score? Well, if you continued support for TLS 1.1 and TLS 1.2 and dropped support for SSL 3.0, your score would move up to (100 + 95) / 2 = 97.5.

Key exchange is graded against the following criteria:

Key Exchange : Score
Weak key (Debian OpenSSL flaw) : 0%
Anonymous key exchange (no authentication) : 0%
Key or DH parameter strength < 512 bits : 20%
Exportable key exchange (limited to 512 bits) : 40%
Key or DH parameter strength < 1024 bits (e.g., 512) : 40%
Key or DH parameter strength < 2048 bits (e.g., 1024) : 80%
Key or DH parameter strength < 4096 bits (e.g., 2048) : 90%
Key or DH parameter strength >= 4096 bits (e.g., 4096) : 100%

Cipher strength is the final piece of the server configuration equation. Servers can support varying strengths of ciphers, so SSL Labs scores cipher strength the same way they do protocol strength: take the score of the strongest cipher, add the score of the weakest cipher, and divide by 2.
The scores for each cipher are as follows:

Cipher Strength : Score
0 bits (no encryption) : 0%
< 128 bits (e.g., 40, 56) : 20%
< 256 bits (e.g., 128, 168) : 80%
>= 256 bits (e.g., 256) : 100%

Sample Web Server

Let's say your web server has the following configuration:

- Valid and trusted certificate
- Protocol support for TLS 1.0 and TLS 1.1
- RSA key with 2048-bit strength
- Cipher algorithm is AES/CBC with 256-bit strength

In this case, you would score a 92.5 for protocol support, a 90 for key exchange, and a 100 for cipher strength. Protocol support accounts for 30% of the overall grade, so you multiply 92.5 by 30%. Key exchange is also 30% of the overall grade, and cipher strength is 40% of the overall grade. Using these values, you would score a (92.5 * 30%) + (90 * 30%) + (100 * 40%) = 94.75. Converting this numerical score to a letter grade would yield an overall "A" score. Congratulations! (A short script at the end of this article reproduces this arithmetic.)

Important Things to Consider...

SSL Labs periodically changes their grading criteria and methodology based on changes in technology. Here are some changes that they have published (updated Feb 2018):

- SSL 2.0 is not allowed (results in an automatic "F")
- Insecure renegotiation is not allowed (results in an automatic "F")
- Vulnerability to the BEAST attack caps the grade at B
- Vulnerability to the CRIME attack caps the grade at C (previously capped at "B" but changed in the May 2015 test version)
- The test results no longer show the numerical score (0-100) because they realized that the letter grade (A-F) is more useful (they still calculate the numerical score... they just don't show it to you)
- No longer require server-side mitigation for the BEAST attack
- Support for TLS 1.2 is now required to get an A grade. Without it, the grade is capped at a B
- If vulnerable to the Heartbleed attack, automatic "F" grade
- If vulnerable to the OpenSSL CVE-2014-0224 vulnerability, automatic "F" grade
- Keys below 2048 bits (e.g., 1024) are now considered weak, and the grade is capped at a B
- Keys under 1024 bits are now considered insecure (results in an automatic "F")
- Warnings have been introduced as part of the rating criteria. In most cases, warnings are about issues that do not yet affect the grade, but likely will in the future. Server administrators are advised to correct the warnings as soon as possible. Some examples are:
  - Warning: RC4 is used with TLS 1.1 or newer protocols. Because RC4 is weak, the only reason to use it is to mitigate the BEAST attack. For some, BEAST is still a threat. Because TLS 1.1 and newer are not vulnerable to BEAST, there is no reason to use RC4 with them
  - Warning: No support for Forward Secrecy
  - Warning: Secure renegotiation is not supported
- Grade A- is introduced for servers with generally good configuration that have one or more warnings
- Grade A+ is introduced for servers with exceptional configurations. At the moment, this grade is awarded to servers with good configuration, no warnings, and HTTP Strict Transport Security support with a max-age of at least 6 months
- MD5 certificate signatures are now considered insecure (results in an automatic "F")
- Clarified that insecure certificate signatures affect the certificate score. This has always been the case for MD2
- Clarified that the strength of DHE and ECDHE parameters affects key exchange scoring. This has always been the case, but previous revisions of the text were not clear about it
- An A+ score is not awarded to servers that use SHA1 certificates
- Overall grade is capped at C if vulnerable to the POODLE attack
- An A+ score is not awarded to servers that don't support TLS_FALLBACK_SCSV
- Overall grade is capped at "B" if SSL 3 is supported
- Overall grade is capped at "B" if RC4 is supported
- Overall grade is capped at "B" if the certificate chain is incomplete
- Servers that have SSL 3.0 as their best protocol automatically get an "F"
- If using weak DH parameters (less than 1024 bits), grade is automatically set to "F"
- If using weak DH parameters (less than 2048 bits), grade is capped at "B"
- If using export cipher suites, grade is automatically set to "F"
- If vulnerable to the CRIME attack, best grade is capped at "C" (was "B" prior to the May 2015 test version)
- Cap grade at "C" if RC4 is used with TLS 1.1+
- Cap grade at "C" if not supporting TLS 1.2
- Fail servers that support only RC4 suites
- Detect when RSA exponent 1 is used. This is insecure and gets an automatic "F"
- Hosts that have HPKP issues can't get an A+ grade
- Servers vulnerable to the DROWN attack get an automatic "F" grade
- If vulnerable to CVE-2016-2107 (padding oracle in AES-NI CBC MAC check), grade is an automatic "F"
- Introduce a penalty (grade capped at C) for using 3DES (and other ciphers with block sizes of 64 bits) with TLS 1.1+
- SHA1 certificates are no longer trusted; results in a "T" grade
- Introduced an explicit penalty for using cipher suites weaker than 112 bits. This was necessary to address a flaw in the SSL Labs grading algorithm that didn't sufficiently penalize these weak suites
- WoSign/StartCom certificates are distrusted and will result in a "T" grade
- If vulnerable to Ticketbleed (CVE-2016-9244), the grade is an automatic "F"

In addition to these updates, SSL Labs is planning to add more criteria changes in March 2018. These include:

- Penalty for not using forward secrecy (grade capped at "B"). Not using forward secrecy is currently a warning, but will soon affect the actual grade of your web server. They will not penalize sites that use suites without forward secrecy provided they are never negotiated with clients that can do better.
- Penalty for not using AEAD suites (grade capped at "B"). Your site should use secure cipher suites, and AEAD is the only encryption approach without any known weaknesses. Also, the new TLS 1.3 protocol supports only AEAD suites. In the new grading criteria, websites will be required to use AEAD suites to get an "A". However, as with forward secrecy, they will not penalize sites if they continue to use non-AEAD suites provided AEAD suites are negotiated with clients that support them.
- Penalty for the Return Of Bleichenbacher Oracle Threat (ROBOT) vulnerability (automatic "F" grade). ROBOT is an attack model based on Daniel Bleichenbacher's chosen-ciphertext attack. Bleichenbacher discovered an adaptive chosen-ciphertext attack against protocols using RSA, and he demonstrated the ability to perform RSA private-key operations. Researchers have been able to exploit the same vulnerability with small variations to the Bleichenbacher attack. The ROBOT vulnerability was a warning in the past, but will now be used in the grading algorithm. Note: F5 has provided mitigation steps for the ROBOT vulnerability in article K21905460: BIG-IP SSL vulnerability (ROBOT) CVE-2017-6168.
- Penalty for using Symantec certificates (grade of "T" will be given). Starting March 1, 2018, SSL Labs will give a "T" grade for Symantec certificates issued before June 2016.

Hopefully you can start to see how your overall grade can change based on different options and configurations. As SSL Labs changes their grading criteria and testing methodology (will support for HTTP/2 be needed for an "A" grade in the future?), you should stay aware of what they are doing and how your web site is affected by their changes. It's important to check back periodically to see how your grade looks... your customers are certainly checking on you! After all, if you're gonna "encrypt everything," you might as well encrypt it correctly. Knowing all this, you can more easily configure your web server to move from a failing grade to an A+ grade. Here's to great web site configurations, effective security, and A+ grades!
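To recap the scoring arithmetic from the "Sample Web Server" section above, here's a small Python sketch (a simplification that only reproduces the weighted-average math described in this article; the real SSL Labs test applies all of the caps and rules listed above on top of it):

```python
# Reproduces the grading arithmetic described in this article.

def category_score(best, worst):
    """SSL Labs averages the best- and worst-supported item in a category."""
    return (best + worst) / 2

# Sample web server from the article:
#   protocols: TLS 1.0 (90) and TLS 1.1 (95)
#   key exchange: 2048-bit RSA key (90)
#   cipher: 256-bit AES (100)
protocol = category_score(95, 90)      # 92.5
key_exchange = 90
cipher = category_score(100, 100)      # 100

overall = protocol * 0.30 + key_exchange * 0.30 + cipher * 0.40
print(f"overall score: {overall}")     # 94.75

# Map the numerical score to a letter grade using the published scale
scale = [(80, "A"), (65, "B"), (50, "C"), (35, "D"), (20, "E")]
grade = next((letter for cutoff, letter in scale if overall >= cutoff), "F")
print(f"letter grade: {grade}")        # A
```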
Security Sidebar: Hands Up…This is a HEIST

The Fear…

The newly discovered HEIST (HTTP Encrypted Information can be Stolen through TCP-Windows) vulnerability is making some noise, and people are rightfully freaked out a little bit. HEIST is accomplished purely in the browser and attacks SSL/TLS channels to expose sensitive encrypted data such as emails, Social Security Numbers, etc. It works by hiding a JavaScript file on a webpage (made to look safe in an advertisement or maybe even hosted directly on a site), and enticing the user to interact with said JavaScript file. Once the interaction has happened, the malicious code queries a variety of web pages and measures the exact size of the encrypted data that those pages transmit. Once the size of the responses is known, the attackers can use a previously-identified compression exploit (such as CRIME or BREACH) to gain access to the data inside those encrypted packets.

Probably the most interesting characteristic of this vulnerability is that it removes the need for a "man-in-the-middle" position. Until now, this compression-based exploit required the attacker to be able to actively manipulate the traffic passing between the web server and end user. One of the researchers who discovered this exploit said it like this: "Before, the attacker needed to be in a Man-in-the-Middle position to perform attacks such as CRIME and BREACH. Now, by simply visiting a website owned by a malicious party, you are placing your online security at risk." The most damaging aspect of HEIST is found by exploiting BREACH, as it allows the attacker to read out CSRF tokens. Depending on the functionality offered by the website, knowing the CSRF token could allow the attacker to take over the complete account of the victim. The simple solution to all this is to tell users to never visit a website owned by a malicious party, right? Yeah, right. So, what can you do to mitigate this vulnerability?

The Redemption…

Fortunately, the BIG-IP offers several countermeasures to help protect against this HEIST vulnerability. Because HEIST relies on compression attacks like CRIME and BREACH, the first countermeasure is to disable HTTP compression on user input pages. Static content can still be compressed, though. Next, configure your BIG-IP ASM for CSRF protection. One of the ways BIG-IP ASM mitigates CSRF attacks is by adding a random CSRF token to every URL. For example, if an HTML response page contains the following URI reference:

a href="https://host.domain.com/default.aspx"

the BIG-IP ASM (with CSRF protection enabled) will rewrite the URI reference to appear similar to the following:

a href="https://host.domain.com/default.aspx?CSRT=17017154763700437104"

This token cannot be guessed in advance by an attacker and therefore makes the CSRF attack almost impossible. The BIG-IP ASM also has a domain cookie protection feature. If an attacker were to use HEIST (or some other exploit) to get the authentication cookie, he must also obtain the rotating ASM cookie that contains a signature of all the other cookies. It's a scary world out there, but it's a little less scary when the BIG-IP is protecting your critical web applications!
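To make the compression side channel that CRIME/BREACH (and therefore HEIST) rely on a bit more concrete, here is a toy, hypothetical Python sketch; the page contents and token value are made up, and this illustrates the principle only, not an actual exploit:

```python
# Toy illustration of the compression side channel: when attacker-controlled
# input is compressed in the same response as a secret, a guess that matches
# the secret typically compresses to the same number of bytes or fewer than
# a wrong guess of the same length. All names and values here are made up.
import zlib

SECRET_PAGE = "csrf_token=4f3a9c01&user=alice"

def compressed_response_size(reflected_guess):
    # Stand-in for a page that reflects attacker input alongside the secret
    body = f"search={reflected_guess};{SECRET_PAGE}"
    return len(zlib.compress(body.encode()))

for guess in ["csrf_token=zzzz", "csrf_token=4f3a"]:   # same length, one correct
    print(f"{guess} -> {compressed_response_size(guess)} bytes")

# The guess sharing a longer prefix with the real token usually yields the
# smaller response; HEIST's insight was that this size can be measured from
# JavaScript in the victim's browser, with no man-in-the-middle required.
```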
Security Sidebar: Support Your Local Security Conference

I've never picked a lock in my life... until today. I had the chance to attend ShowMeCon, and one of the expo booths was a hands-on experience for lock-picking. The experience was really cool, and it reminded me of the great things you can learn when you support local security conferences. ShowMeCon is the premier hacking and security conference in the St. Louis area. They had lots of great speakers and tons of great information to learn.

I heard one speaker outline the details of how he used social engineering techniques to pose as a pest control bug spray guy and walked right into the vault of more than one bank in the Las Vegas, NV area. He was hired by the bank president to test the security of the bank. At the first branch he walked into, he simply said he was there to spray for bugs, and the bank employees escorted him straight to the vault... no questions asked! Needless to say, the bank president was none too pleased with the performance of his employees. The power of social engineering and the impact of human emotion was clearly shown during his presentation. It was great.

I also heard from Dan Tentler, who was hired by a journalist to hack into his life. The journalist was doing a documentary on how easy it is to ruin someone's life through hacking, and the journalist was the guinea pig in this case. The presentation by the hacker was fantastic, and he showed just how incredibly simple it is to take control of someone else's life. My favorite part of the presentation was when he showed emails from him to the journalist that included pictures of the journalist taken from the webcam of his own MacBook Pro. The journalist was pretty freaked out when he saw the pictures of himself... taken from his own computer. The documentary is very interesting and can be found here.

Kevin Johnson also spoke on the subject of ethics in security research. He did a great job of explaining how modern bug bounties impact the security perception of corporations today. He successfully incorporated several "My Cousin Vinny" examples (which is a rare and awesome talent), and he also talked about being branded "that guy" when your company screws up a security-related incident. To emphasize his point, he referenced an incident back in the 1970s where a beached whale was found on the shores of Florence, Oregon. To clean up the whale carcass, the head engineer on the job decided to use a massive amount of dynamite to blow the whale to bits so that seagulls and other scavengers would eat the pieces over time. As it turns out, the idea did not go as planned. And the engineer on site was forever known as "the whale guy." Check out the video here. And don't be that guy.

Other topics included credit card token manipulation, breaking cipher block chaining encryption modes through collision attacks, antivirus evasion, and red team/blue team techniques for hacking and defense. The conference was well worth the price of admission, and I would highly encourage any and all of you to take advantage of security conferences in your area. Who knows, you might just learn some new tricks like I did. The lock I picked is the actual evidence of my mad picking skillz. And that other unlocked yellow one on the table... yeah, I got that one too!
Security Sidebar: My Browser Has No Idea Your Certificate Was Just Revoked

Encryption is a fundamental reality on the Internet today. Most sites use SSL/TLS for encryption, and you can identify these sites by the https:// in the address bar of your browser. The Internet security service company Netcraft has been tracking SSL usage for over 20 years now, and their most recent data shows that there are now more than one thousand times more certificates on the web today than in 1996. DevCentral is no exception to this SSL phenomenon... go ahead, check your browser's address bar and notice the address for this article (or anything else on DevCentral, for that matter) will start with https:// instead of plain old http://.

This SSL/TLS encryption provides a secure means of communication between your browser and the web server. In order to make all this encryption happen, encryption keys are shared between the web server and your browser. Encryption key exchange gets very complicated, and this article is not meant to explain all the details of encryption key exchange mechanisms, but from a very high-level perspective, it's fair to say that these keys are shared by using the web server's SSL/TLS certificate. When a user visits a secure website, an encryption key exchange process takes place, and the resulting encryption keys are used to encrypt all communication between that user and the web server. A certificate is a digital file that holds several pieces of information related to a particular website. One of the pieces of information it holds is the public portion of the encryption key used to encrypt all the communications to/from the web server. Another piece of information it holds is the effective dates of the certificate. After all, these things are only good for a finite period of time (typically 1-2 years).

In a perfect world, a web server would be issued a certificate, that certificate would never get compromised, and it would be used for the full duration of its life. But we don't live in a perfect world. The reality is that certificates get compromised all the time, and when that happens, the certificate needs to be revoked. Typically when a web server certificate is revoked, a new certificate is created and used in place of the old, revoked certificate. But how does a user know that a certificate has been revoked?

The Magic of CRL and OCSP

Here's how it works... when a user visits a secure website, the certificate is sent from the website to the user's browser (Chrome, Firefox, Internet Explorer, Safari, etc.). Because certificate sharing creates significant computational overhead, many browsers simply store the certificate information from a previously-visited website in their cache so they don't have to keep asking for a new certificate each time they visit that website. This is nice because it significantly speeds up the user experience for loading that particular secure website, but it also presents a problem when the certificate is no longer valid. In order to check that a given certificate is still valid, the concept of a Certificate Revocation List (CRL) was introduced. The CRL is a digital file created by a Certification Authority (the organization that creates and distributes certificates) that contains the serial number of each certificate that has been revoked by that CA. In order for a browser to check that a given certificate is still valid, the CRL must be downloaded and the serial number of the website you are visiting must be checked against the CRL to ensure the certificate is not revoked.
If the certificate is not revoked, all is good and the browser displays the page. But if the certificate has been revoked, the browser should display a warning page that tells you the certificate has been revoked. Some browsers will allow you to continue to the page anyway and others won't... it just depends on the browser. The CRL check method is computationally expensive because the browser has to download the CRL every time it needs to check a certificate (which happens very frequently), and it also has to search through the CRL for a match of the serial number of the certificate it is using (CRL files can get extremely large). In order to avoid frequent CRL downloads and searches, some browsers will cache the CRL for a given period of time and simply check the cached CRL instead of downloading a new one each time. This helps speed things up, but what if the CRL changed since the last time it was cached in your browser? This situation became a big enough problem that a new, faster solution was introduced. The Online Certificate Status Protocol (OCSP) was developed in 1999, and it is a solution that queries an online database of serial numbers for revoked certificates. Rather than host CRL files, CAs can instead set up an OCSP server and include its URL in issued certificates. Clients that support OCSP can then query the database to see if a given certificate has been revoked, instead of downloading the entire CRL file. OCSP is a much more efficient solution than the CRL method.

As an example, take the certificate issued to f5.com. It was issued by the Entrust Certification Authority, so Entrust is the one who manages the CRL file for all their revoked certificates, and they also manage the list of serial numbers used in the OCSP queries. The certificate's details include the CRL distribution point: the URL where the browser can download the CRL from Entrust. Keep in mind that this CRL is just the one managed by Entrust. I took the liberty of visiting the URL where the CRL is located, and the CRL is simply a big list of serial numbers, each with a revocation date and a reason for revocation. The certificate also contains an OCSP portion: the URL that the browser will visit in order to check the current certificate serial number against the database of revoked serial numbers managed by Entrust.

So Many Certificates, So Little Time…

Now that we understand how a browser checks to see if a certificate is still valid, let's take a little deeper look at the different types of certificates available today. There are three types of certificates you can purchase today: Domain Validation (DV), Organization Validation (OV), and Extended Validation (EV) certificates. The DV certificate is the cheapest and most popular type of certificate. Each CA makes up its own rules as to what they require from an organization before they will issue the DV certificate. Typically it's a very simple (and many times automated) process, like the CA sending you a file and you placing that file on the web server at the domain in question... just something simple to let the CA know that they are issuing a certificate for a given domain to the actual owner. There are, of course, many different ways to hack this process and get a DV certificate for a domain that you don't own.
But, that’s a topic for another day. The OV certificate is more expensive than the DV, and it takes things a bit further with respect to the CA checking that the requesting organization actually owns the domain name. There are some other organizational vetting procedures that a CA might take in order to more fully understand that the requestor is the owner. In addition to what they would do for a DV certificate, maybe they’ll make a few reference phone calls and do some simple background checks. But, then again, it’s totally up to the individual CA to develop their own procedures for vetting an organization prior to issuing an OV certificate. The EV certificate is the most expensive, and it requires the most amount of background checking and validation from a CA before it is approved. In fact, the process for vetting an organization prior to issuing an EV certificate is governed by the CA/Browser Forum. This organization has developed a robust list of requirements that must be met before a CA can issue an EV certificate. Companies that want the EV certificate do so because they want to show the world that they are serious about the security of their online presence. The reality is that users visit these secure sites via an Internet Browser (Chrome, Firefox, Internet Explorer, etc), so it’s interesting to see how these different browsers handle certificate revocation and also how they treat the different kinds of certificates. The top 3 browsers on the Internet today are Google Chrome (70.4%), Mozilla Firefox (17.5%), and Microsoft Internet Explorer (5.8%). Almost 94% of all Internet traffic is displayed by one of these three browsers. Let’s take a quick minute to see how these check for certificate revocation. Google Chrome Google Chrome is by far the most popular browser today. When it comes to certificate revocation checking, Chrome blazes its own trail and does its own thing. The CRL and OCSP methods of certificate revocation checking are the industry standards, but Google isn’t standard in this space. It has created what’s known as a CRLSet to check for certificate status. A CRLSet is Google’s own list of revoked certificates that it compiles and updates when it crawls the CRLs from the major CAs around the world. Instead of checking an OCSP responder or a CRL, Google Chrome simply checks its own CRLSet for certificate status when visiting a secure website. Google claims that this is faster and safer for the user than the traditional CRL or OCSP methods. Some people think this approach is good because it’s faster to check a locally stored list than using the traditional methods and you also don’t have to worry about OCSP responder or CRL distribution point availability. But others are skeptical because the CRLSet is only comprised of certificates that Google deems worthy to include. What if the certificate you need to check isn’t on the CRLSet list? Also, the CRLSet file size is explicitly limited to 250KB. If something happens and lots of certificates are suddenly revoked causing the CRLSet to get bigger than 250KB (Heartbleed for example), then certificates are deleted from the CRLSet so that it stays at the 250KB max size. What if the certificate status you need gets deleted from the CRLSet during one of these bloat sessions? Mozilla Firefox Firefox allows you to check for revoked certificates via the OCSP method, but it doesn’t use the CRL at all. 
If a given certificate includes an OCSP address in the Authority Information Access (AIA) portion of the certificate, then Firefox will query the OCSP server to make sure the certificate is not revoked. If the OCSP server isn't available or if the OCSP address is not present in the AIA field of the certificate, then Firefox won't check revocation status and will present an error message (which the user can click through to proceed anyway). This behavior is controlled by a setting in Firefox (version 46.0.1 at the time of writing).

Microsoft Internet Explorer

Interestingly, Microsoft IE conducts the most comprehensive certificate revocation check of these leading browsers. The default setting is like Firefox... it checks the OCSP responder if the address is present in the AIA field of the certificate. But if the OCSP server is not available or if the OCSP address is not present, it will then check the CRL (it checks the CRL loaded in cache if possible so that it doesn't have to continually download a large CRL file). If neither of these is available, it will present a warning page and give the user the option of either proceeding with an unknown certificate status or closing out of the browser. These checks are likewise controlled by IE's certificate revocation settings.

You can see that each browser handles certificate revocation a little differently than the next. So, it's entirely possible that a revoked certificate could fall through the cracks if Google decided not to add it to their CRLSet, Firefox couldn't contact the OCSP server, and Internet Explorer had an outdated version of the CRL stored in cache. Be careful out there...
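For the curious, here's a small Python sketch (assuming the third-party cryptography package is installed and that the site's certificate actually carries both extensions) that pulls the CRL distribution point and OCSP responder URLs described earlier out of a live certificate; the hostname is just an example:

```python
# Fetch a server certificate over TLS and print where a client would go to
# check revocation: the CRL distribution point and the OCSP responder (AIA).
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.x509.oid import AuthorityInformationAccessOID

host = "www.f5.com"   # any HTTPS site will do

# Grab the DER-encoded certificate presented during the TLS handshake
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        der_cert = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der_cert, default_backend())

crl_ext = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
for dp in crl_ext.value:
    for name in (dp.full_name or []):
        print("CRL distribution point:", name.value)

aia_ext = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
for desc in aia_ext.value:
    if desc.access_method == AuthorityInformationAccessOID.OCSP:
        print("OCSP responder:", desc.access_location.value)
```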
Security Sidebar: Will The Real "Satoshi Nakamoto" Please Stand Up?

Some people love the anonymity that the Internet offers. You can lurk in the shadows using a random pseudonym, and if you are careful enough, it's likely that no one will ever know who you are. Back in 2008, an inventor named "Satoshi Nakamoto" created a peer-to-peer electronic cash system known as Bitcoin. The system was first used in 2009, and it has steadily gained popularity ever since.

Bitcoin is fascinating because it is a completely legitimate form of currency but is not backed by any central monetary authority. Conventional currency is issued by a central bank and backed by something... maybe gold or silver or something similar. Once upon a time, the United States dollar was backed by a huge cache of gold predominantly stored in a very secure facility at Fort Knox. Bitcoin is not backed by anything and not issued by any central bank. Instead, it is comprised of a peer-to-peer network made up of its users' computers. These computers use their power to solve complex mathematical problems (known as "mining"), and the complexity of these problems grows over time. Because the difficulty of these problems grows over time, the number of Bitcoins allowed in circulation is naturally limited. When a new Bitcoin is mined, all the computers in the network have to agree on the newly mined Bitcoin. So, the value of Bitcoin relies on the fact that all the users in the Bitcoin network are invested in this process and won't stand for any nefarious activity that would devalue the hard work they used to generate their own Bitcoins. As of this article, one single Bitcoin is worth about $450.00.

When Bitcoin was released, most everything about it was made completely public. The code, the protocol, the processes... everything. Everything except the identity of Satoshi Nakamoto. Of course, this mysterious identity has prompted people to want to know the man... or woman... or group of people... behind the genius system that is Bitcoin. And for seven years, no one has known. Back in 2014, a man really named Satoshi Nakamoto was famously accused of being the inventor of Bitcoin, but he adamantly denied this charge. Many other people have speculated as to the actual identity of Nakamoto, but no one really knows. Here's a little list of people who have been thought to be the real Satoshi Nakamoto:

- Michael Clear, a graduate cryptography student at Dublin's Trinity College
- Neal King, Vladimir Oksman, and Charles Bry
- Martti Malmi, a developer living in Finland
- Jed McCaleb, a lover of Japanese culture and resident of Japan
- Donal O'Mahony and Michael Peirce, computer scientists
- Professor Shinichi Mochizuki, Japanese mathematician
- Dorian S. Nakamoto, a Japanese man residing in California
- Hal Finney, developer
- Michael Weber, developer
- Wei Dai, developer
- Nick Szabo, technical writer

Among those included on the "who is the real Satoshi Nakamoto" list was an Australian entrepreneur named Craig S. Wright. Wright had a fairly plausible resume when it came to associating his name with Nakamoto. He had many emails, transcripts, and other documents that defended the claim that his name should be on the Nakamoto list. How do we know he had all this stuff? Someone hacked into his business email account and found it all, of course! The problem is... Wright never said he was Nakamoto. In fact, ALL these people on the Nakamoto list have either outright denied that they are Nakamoto or have never come forward and claimed to be him. That is... until today.
Craig S. Wright contacted the BBC, the Economist, and GQ to identify himself as the real Satoshi Nakamoto. At a meeting in London, Wright met with these three media organizations and also invited several prominent Bitcoin developers and scientists. He used cryptographic proofs to show that he was, in fact, the father of Bitcoin. Now, he plans to move one of the "Satoshi Nakamoto" Bitcoins to further prove he is Nakamoto. The incredible transparency of Bitcoin transfers will make this move pretty rock-solid proof that he really is who he says he is. He claims that he came forward because some of the people he cares about deeply were being falsely accused of many malicious things related to the Nakamoto identity. He finally came to the realization that he needed to prove his identity to save his friends from any more of these attacks. Let's admit it, we've been looking for Mr. Wright for a long time, and now we finally found him... or did we?
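As a footnote for the technically curious, here's a toy Python sketch of the proof-of-work "mining" idea mentioned above (purely illustrative; real Bitcoin mining double-hashes block headers against a network-wide difficulty target, and the block data below is made up):

```python
# Toy proof-of-work: find a nonce whose hash starts with N leading zero hex
# digits. Raising the difficulty by one digit multiplies the expected work
# by 16, which is how a network can keep new coins on a fixed schedule even
# as miners get faster.
import hashlib

def mine(block_data, difficulty):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block: alice pays bob 1 BTC", difficulty=4)
print(nonce, digest)
```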