certificates
Using Cryptonice to check for HTTPS misconfigurations in devsecops workflows
Co-Author: Katie Newbold, F5 Labs Intern, Summer 2020

A huge thanks to Katie Newbold, lead developer on the Cryptonice project, for her amazing work and patience as I constantly moved the goal posts for this project.

---

F5 Labs recently published an article, Introducing the Cryptonice HTTPS Scanner. Cryptonice is aimed at making it easy for everyone to scan for and diagnose problems with HTTPS configurations. It is provided as a command line tool and Python library that allows a user to examine the TLS protocols and ciphers, certificate information, web application headers and DNS records for one or more supplied domain names.

You can read more about why Cryptonice was released over at F5 Labs, but it basically boils down to a few simple reasons. Primarily, many people fire-and-forget their HTTPS configurations, which means they become out of date, and therefore weak, over time. In addition, other protocols, such as DNS, can be used to improve upon the strength of TLS, but few sites make use of them. Finally, with an increasing shift to automation (i.e. devsecops) it's important to integrate TLS testing into the application lifecycle.

How do I use Cryptonice?

Since the tool is available as an executable, a Python script, and a Python library, there are a number of ways and means in which you might use Cryptonice. For example:

- The executable may be useful for those who do not have Python 3 installed and who want to perform occasional ad-hoc scans against internal or external websites.
- The Python script may be installed alongside other Python tools, which could allow an internal security team to perform regular and scriptable scanning of internal sites.
- The Python library could be used within devops automation workflows to check for valid certificates, protocols and ciphers when new code is pushed into dev or production environments.

The aforementioned F5 Labs article provides a quick overview of how to use the command line executable and Python script. But this is DevCentral, after all, so let's focus on how to use the Python library in your own code.

Using Cryptonice in your own code

Cryptonice can output results to the console but, since we're coding, we'll focus on the detailed JSON output that it produces. It collects all scan and test results into a Python dictionary, so this variable can be read directly, or your code may wish to read in the completed JSON output. More on this later.

First off we'll need to install the Cryptonice library. With Python 3 installed, we simply use the pip command to download and install it, along with its dependencies.

pip install cryptonice

Installing Cryptonice using pip will also install the dependent libraries: cffi, cryptography, dnspython, http-client, ipaddress, nassl, pycurl, pycparser, six, sslyze, tls-parser, and urllib3. You may see a warning about the cryptography library installation if you have a version that is greater than 2.9, but Cryptonice will still function. This warning is generated because the sslyze package currently requires the cryptography library version to be between 2.6 and 2.9.

Creating a simple Cryptonice script

An example script (sample_script.py) is included in the GitHub repository. In this example, the script reads in a fully formatted JSON file called sample_scan.json from the command line (see below) and outputs the results to a JSON file whose filename is based on the site being scanned. The only Cryptonice module that needs to be imported in this script is scanner.
The JSON input is converted to a dictionary object and sent directly to the scanner_driver function, where the output is written to a JSON file through the writeToJSONFile function.

from cryptonice import scanner
import argparse
import json

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("input_file", help="JSON input file of scan commands")
    args = parser.parse_args()
    input_file = args.input_file

    with open(input_file) as f:
        input_data = json.load(f)

    output_data, hostname = scanner.scanner_driver(input_data)
    if output_data is None and hostname is None:
        print("Error with input - scan was not completed")

if __name__ == "__main__":
    main()

In our example, above, the scanner_driver function is passed the necessary dictionary object, which is created from the JSON file supplied as a command line parameter. Alternatively, the dictionary object could be created dynamically in your own code. It must, however, contain the same key/value pairs as our sample input file. This is what the JSON input file must look like:

{
    "id": string,
    "port": int,
    "scans": [string],
    "tls_params": [string],
    "http_body": boolean,
    "force_redirect": boolean,
    "print_out": boolean,
    "generate_json": boolean,
    "targets": [string]
}

If certain parameters (such as "scans", "tls_params", or "targets") are excluded completely, the program will abort early and print an error message to the console.

Mimicking command line input

If you would like to mimic the command line input in your own code, you could write a function that accepts a domain name via command line parameter and runs a suite of scans as defined in your variable default_dict:

from cryptonice.scanner import writeToJSONFile, scanner_driver
import argparse

default_dict = {'id': 'default',
                'port': 443,
                'scans': ['TLS', 'HTTP', 'HTTP2', 'DNS'],
                'tls_params': ["certificate_info", "ssl_2_0_cipher_suites", "ssl_3_0_cipher_suites",
                               "tls_1_0_cipher_suites", "tls_1_1_cipher_suites", "tls_1_2_cipher_suites",
                               "tls_1_3_cipher_suites", "http_headers"],
                'http_body': False,
                'print_out': True,
                'generate_json': True,
                'force_redirect': True
                }

def main():
    parser = argparse.ArgumentParser(description="Supply commands to cryptonice")
    parser.add_argument("domain", nargs='+', help="Domain name to scan", type=str)
    args = parser.parse_args()
    domain_name = args.domain
    if not domain_name:
        parser.error('domain (like www.google.com or f5.com) is required')

    input_data = default_dict
    input_data.update({'targets': domain_name})

    output_data, hostname = scanner_driver(input_data)
    if output_data is None and hostname is None:
        print("Error with input - scan was not completed")

if __name__ == "__main__":
    main()

Using the Cryptonice JSON output

Full documentation for the Cryptonice JSON output will shortly be available on the Cryptonice ReadTheDocs pages, and whilst many of the key/value pairs will be self-explanatory, let's take a look at some of the more useful ones.

TLS protocols and ciphers

The tls_scan block contains detailed information about the protocols, ciphers and certificates discovered as part of the handshake with the target site. This can be used to check for expired or expiring certificates, to ensure that old protocols (such as SSLv3) are not in use, and to view recommendations. cipher_suite_supported shows the cipher suite preferred by the target webserver. This is typically the best (read: most secure) one available to modern clients. Similarly, highest_tls_version_supported shows the latest available version of the TLS protocol for this site.
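Because the whole point of using the library in a devsecops workflow is to act on these values automatically, it may help to see what a simple quality gate could look like. The sketch below is not part of Cryptonice itself; the policy, the target name and the exact key names (taken from the sample output shown later in this article) are assumptions you should verify against your own scan results before wiring anything into a pipeline.

```python
import sys
from cryptonice import scanner

# Hypothetical CI/CD gate: run a TLS-only scan and fail the build if the
# results violate a simple policy. Key names follow the sample JSON output
# shown below; confirm them against your own Cryptonice version.
scan_config = {
    "id": "ci_gate",
    "port": 443,
    "scans": ["TLS"],
    "tls_params": ["certificate_info", "ssl_3_0_cipher_suites", "tls_1_2_cipher_suites"],
    "http_body": False,
    "force_redirect": True,
    "print_out": False,
    "generate_json": False,
    "targets": ["www.example.com"],   # placeholder target
}

output_data, hostname = scanner.scanner_driver(scan_config)
tls = output_data.get("tls_scan", {})

problems = []
if tls.get("ssl_3_0", {}).get("accepted_ssl_3_0_cipher_suites"):
    problems.append("SSLv3 is still accepted")

cert = tls.get("certificate_info", {}).get("certificate_0", {})
if cert.get("days_left", 0) < 30:
    problems.append("leaf certificate expires in under 30 days")

if problems:
    print(f"{hostname}: " + "; ".join(problems))
    sys.exit(1)   # non-zero exit fails the pipeline step
print(f"{hostname}: TLS policy checks passed")
```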
In this example, cert_recommendations is blank, but if a certificate were untrusted or expired this would be a quick place to check for any urgent action that should be taken. The dns section shows results for cryptographically relevant DNS records, for example Certificate Authority Authorization (CAA) and DKIM (found in the TXT records). In our example, below, we can see a dns_recommendations entry which suggests implementing DNS CAA since no such records can be found for this domain.

{
  "scan_metadata":{
    "job_id":"test.py",
    "hostname":"example.com",
    "port":443,
    "node_name":"Cocumba",
    "http_to_https":true,
    "status":"Successful",
    "start":"2020-07-13 14:31:09.719227",
    "end":"2020-07-13 14:31:16.939356"
  },
  "http_headers":{
    "Connection":{ },
    "Headers":{ },
    "Cookies":{ }
  },
  "tls_scan":{
    "hostname":"example.com",
    "ip_address":"104.127.16.98",
    "cipher_suite_supported":"TLS_AES_256_GCM_SHA384",
    "client_authorization_requirement":"DISABLED",
    "highest_tls_version_supported":"TLS_1_3",
    "cert_recommendations":{ },
    "certificate_info":{
      "leaf_certificate_has_must_staple_extension":false,
      "leaf_certificate_is_ev":false,
      "leaf_certificate_signed_certificate_timestamps_count":3,
      "leaf_certificate_subject_matches_hostname":true,
      "ocsp_response":{
        "status":"SUCCESSFUL",
        "type":"BasicOCSPResponse",
        "version":1,
        "responder_id":"17D9D6252267F931C24941D93036448C6CA91FEB",
        "certificate_status":"good",
        "hash_algorithm":"sha1",
        "issuer_name_hash":"21F3459A18CAA6C84BDA1E3962B127D8338A7C48",
        "issuer_key_hash":"37D9D6252767F931C24943D93036448C2CA94FEB",
        "serial_number":"BB72FE903FA2B374E1D06F9AC9BC69A2"
      },
      "ocsp_response_is_trusted":true,
      "certificate_0":{
        "common_name":"*.example.com",
        "serial_number":"147833492218452301349329569502825345612",
        "public_key_algorithm":"RSA",
        "public_key_size":2048,
        "valid_from":"2020-01-17 00:00:00",
        "valid_until":"2022-01-16 23:59:59",
        "days_left":552,
        "signature_algorithm":"sha256",
        "subject_alt_names":[
          "www.example.com"
        ],
        "certificate_errors":{
          "cert_trusted":true,
          "hostname_matches":true
        }
      }
    },
    "ssl_2_0":{
      "preferred_cipher_suite":null,
      "accepted_ssl_2_0_cipher_suites":[]
    },
    "ssl_3_0":{
      "preferred_cipher_suite":null,
      "accepted_ssl_3_0_cipher_suites":[]
    },
    "tls_1_0":{
      "preferred_cipher_suite":null,
      "accepted_tls_1_0_cipher_suites":[]
    },
    "tls_1_1":{
      "preferred_cipher_suite":null,
      "accepted_tls_1_1_cipher_suites":[]
    },
    "tls_1_2":{
      "preferred_cipher_suite":"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "accepted_tls_1_2_cipher_suites":[
        "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
        "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384",
        "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA"
      ]
    },
    "tls_1_3":{
      "preferred_cipher_suite":"TLS_AES_256_GCM_SHA384",
      "accepted_tls_1_3_cipher_suites":[
        "TLS_CHACHA20_POLY1305_SHA256",
        "TLS_AES_256_GCM_SHA384",
        "TLS_AES_128_GCM_SHA256",
        "TLS_AES_128_CCM_SHA256",
        "TLS_AES_128_CCM_8_SHA256"
      ]
    },
    "tests":{
      "compression_supported":false,
      "accepts_early_data":false,
      "http_headers":{
        "strict_transport_security_header":{
          "preload":false,
          "include_subdomains":true,
          "max_age":15768000
        }
      }
    },
    "scan_information":{ },
    "tls_recommendations":{ }
  },
  "dns":{
    "Connection":"example.com",
    "dns_recommendations":{
      "Low-CAA":"Consider creating DNS CAA records to prevent accidental or malicious certificate issuance."
    },
    "records":{
      "A":[
        "104.127.16.98"
      ],
      "CAA":[],
      "TXT":[],
      "MX":[]
    }
  },
  "http2":{
    "http2":false
  }
}

Advanced Cryptonice use - Calling specific modules

There are 6 files in Cryptonice that are necessary for its functioning in other code. scanner.py and checkport.py live in the Cryptonice folder, and getdns.py, gethttp.py, gethttp2.py and gettls.py all live in the cryptonice/modules folder. A full scan is run out of the scanner_driver function in scanner.py, which generates a dictionary object based on the commands it receives via the input parameter dictionary. scanner_driver is modularized, allowing you to call as many or as few of the modules as needed for your purposes. However, if you would like to customize your use of the Cryptonice library further, individual modules can be selected and run as well. You may choose to call the scanner_driver function and have access to all the modules in one location, or you could call on certain modules while excluding others. Here is an example of a function that calls the tls_scan function in modules/gettls.py to specifically make use of the TLS code and none of the other modules.

from cryptonice.modules.gettls import tls_scan

def tls_info():
    ip_address = "172.217.12.196"
    host = "www.google.com"
    commands = ["certificate_info"]
    port = 443

    tls_data = tls_scan(ip_address, host, commands, port)
    cert_0 = tls_data.get("certificate_info").get("certificate_0")

    # Print certificate information
    print(f'Common Name:\t\t\t{cert_0.get("common_name")}')
    print(f'Public Key Algorithm:\t\t{cert_0.get("public_key_algorithm")}')
    print(f'Public Key Size:\t\t{cert_0.get("public_key_size")}')
    if cert_0.get("public_key_algorithm") == "EllipticCurvePublicKey":
        print(f'Curve Algorithm:\t\t{cert_0.get("curve_algorithm")}')
    print(f'Signature Algorithm:\t\t{cert_0.get("signature_algorithm")}')

if __name__ == "__main__":
    tls_info()

Getting Started

The modularity of Cryptonice makes it a versatile tool to be used within other projects. Whether you want to use it to test the strength of an internal website using the command line tool or integrate the modules into another project, Cryptonice provides a detailed and easy way to capture certificate information, HTTP headers, DNS restrictions, TLS configuration and more. We plan to add additional modules to query certificate transparency logs, test for protocols such as HTTP/3 and produce detailed output with guidance on how to improve your cryptographic posture on the web. This is version 1.0, and we encourage the submission of bugs and enhancement requests to our GitHub page so that everyone may benefit from the resulting fixes and new features.

The Cryptonice code and binary releases are maintained on the F5 Labs GitHub pages. Full documentation is currently being added to our ReadTheDocs page and the Cryptonice library is available on PyPi:

- F5 Labs overview: https://www.f5.com/labs/cryptonice
- Source and releases: https://github.com/F5-Labs/cryptonice
- PyPi library: https://pypi.org/project/cryptonice
- Documentation: https://cryptonice.readthedocs.io
Post of the Week: SSL on a Virtual Server

In this Lightboard Post of the Week, I answer a few questions about SSL/https on Virtual Servers. BIG-IP being a default-deny, full-proxy device, it's important to configure specific ports, like 443, to accept https traffic, along with client-side and server-side profiles, and to include your SSL certificates. We cover things like SAN certificates, but I failed to mention that self-signed certificates are bad anywhere except for testing or on the server side of the connection. Thanks to DevCentral members testimony, Only1masterblaster, Faruk AYDIN, MrPlastic, Tyler G, Prince, and dward for their Q/A engagement.

Posted Questions on DevCentral:
- https on virtual server
- LINKING SSL CERTIFICATE TO A VIRTUAL SERVER
- SSL CERTIFICATE KEY
- Maximum number of client SSL profiles per virtual server?
- Need to support thousands of unique SSL certificates on a single VIP

ps
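The post above walks through doing this in the BIG-IP GUI; for teams that prefer automation, the same idea (a port 443 virtual server with a client SSL profile referencing your certificate and key) can be scripted against the iControl REST API. The snippet below is only a sketch: the endpoint paths and field names are assumptions based on how iControl REST generally mirrors tmsh, and the addresses, object names and credentials are placeholders, so check everything against the API reference for your TMOS version.

```python
import requests

BIGIP = "https://192.0.2.245"        # hypothetical management address
session = requests.Session()
session.auth = ("admin", "admin")    # use token-based auth and real credentials in practice
session.verify = False               # lab only; validate the management certificate in production

# 1. A client SSL profile that references a certificate/key pair already installed on the box.
session.post(f"{BIGIP}/mgmt/tm/ltm/profile/client-ssl", json={
    "name": "example_clientssl",
    "cert": "/Common/www.example.com.crt",
    "key": "/Common/www.example.com.key",
}).raise_for_status()

# 2. An HTTPS virtual server on port 443 with HTTP, client-side and server-side SSL profiles attached.
session.post(f"{BIGIP}/mgmt/tm/ltm/virtual", json={
    "name": "vs_example_https",
    "destination": "/Common/203.0.113.10:443",
    "ipProtocol": "tcp",
    "profiles": [
        {"name": "http"},
        {"name": "example_clientssl", "context": "clientside"},
        {"name": "serverssl", "context": "serverside"},
    ],
    "pool": "/Common/example_pool",
}).raise_for_status()
```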
How to deploy more SSL sites with fewer SSL certificates

There is an increasing need to deploy SSL sites (SPDY, HTTP/2, and SSL Everywhere). Traditional SSL sites require deploying one SSL certificate per site, and it gets very expensive to order and maintain many SSL certificates. Consolidating SSL resources by utilizing wildcard and Subject Alternative Name (SAN) certificates reduces the maintenance and cost of deploying SSL sites.

SSL Review

A quick review of how SSL works. You type a name into your web browser (i.e. www.mycompany.example), and the web browser connects and verifies that the certificate that is presented is signed by a trusted party and that the name matches the requested name. Should any of these checks fail, you get a nasty-gram from your browser. In the past this required a single SSL certificate per IP address, but Server Name Indication (SNI) makes it possible to attach multiple SSL certificates to a single IP address as long as the client supports SNI.

Traditional Cert

A traditional certificate only contains a single name.

store.mycompany.example

Wildcard Certs

A wildcard cert replaces a single name with a wildcard character. Browsers will treat the "*" character as any valid name.

*.blog.mycompany.example

Subject Alternative Names

A SAN cert is similar to a traditional SSL cert with the added bonus that you can provide a list of "alternative" names that are valid. For example, a SAN cert could be limited to only the following names:

www.hr.mycompany.example
benefits.hr.mycompany.example
jobs.hr.mycompany.example

What type of cert to use

These examples highlight my recommendations for what type of certificate to use. Sites that transact sensitive data (i.e. SSN or CCN) should have their own certificate. Sites that have a low level of security and a high number of names would benefit from a wildcard certificate. A middle ground would be a SAN certificate. When using SAN certs it is best to group them together by organization or security classification.

How to consolidate services

The above should help in reducing the number of certificates that you have. If you want to further reduce the number of IP addresses that you're using for your sites, please read my companion article on Routing HTTP by request headers. There's also another DevCentral article about SSL Profiles where you can learn more about SSL.
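Since SNI is what makes it possible to serve several certificates from one IP address, it can be handy to verify from the client side which certificate a given hostname actually receives and which names it covers. The short Python sketch below uses only the standard library; the hostnames are placeholders from the examples above.

```python
import socket
import ssl

def show_cert_names(hostname, port=443):
    """Connect using SNI (server_hostname) and print the names the presented certificate covers."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    subject = dict(item[0] for item in cert.get("subject", ()))
    sans = [value for key, value in cert.get("subjectAltName", ()) if key == "DNS"]
    print(f"{hostname}: CN={subject.get('commonName')} SANs={sans}")

# Placeholder names - point these at sites you actually operate.
for name in ("www.mycompany.example", "store.mycompany.example"):
    show_cert_names(name)
```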
Dispelling the New SSL Myth

Claiming SSL is not computationally expensive is like saying gas is not expensive when you don't have to drive to work every day.

My car is eight years old this year. It has less than 30,000 miles on it. Yes, you heard that right, less than 30,000 miles. I don't drive my car very often because, well, my commute is a short trip down two flights of stairs. When I do drive, I don't need to go very far; it's only ten miles or so round trip to the grocery store. So from my perspective, gas isn't really very expensive. I may use a tank of gas a month, which works out to … well, it's really not even worth mentioning the cost. But for someone who commutes every day – especially someone who commutes a long distance every day – gas is expensive. It's a significant expense every month for them and they would certainly dispute my assertion that the cost of gas isn't a big deal. My youngest daughter, for example, would say gas is very expensive – but she's got a smaller pool of cash from which to buy gas so, relatively speaking, we're both right.

The same is true for anyone claiming that SSL is not computationally expensive. The way in which SSL is used – the ciphers, the certificate key lengths, the scale – has a profound impact on whether or not "computationally expensive" is an accurate statement. And as usual, it's not just about speed – it's also about the costs associated with achieving that performance. It's about efficiency, and leveraging resources in a way that enables scalability. It's not the cost of gas alone that's problematic, it's the cost of driving, which also has to take into consideration factors such as insurance, maintenance, tires, parking fees and other driving-related expenses.

MYTH: SSL is NOT COMPUTATIONALLY EXPENSIVE TODAY

SSL is still computationally expensive. Improvements in processor speeds have, in some circumstances, made that expense less impactful. Circumstances are changing. Commoditized x86 hardware can in fact handle SSL a lot better today than it ever could before – when you're using 1024-bit keys and "easy" ciphers like RC4. Under such parameters it is true that commodity hardware may perform efficiently and scale up better than ever when supporting SSL. Unfortunately for proponents of SSL-on-the-server, 1024-bit keys are no longer the preferred option, and security professionals are likely well aware that "easy" ciphers are also "easy" pickings for miscreants.

In January 2011, NIST recommendations regarding the deployment of SSL went into effect. While NIST is not a standards body that can require compliance, they can and do force government and military compliance and have shown their influence with commercial certificate authorities. All commercial certificate authorities now issue only 2048-bit keys. This increase has a huge impact on the capacity of a server to process SSL and renders completely inaccurate the statement that SSL is not computationally expensive anymore. A typical server that could support 1500 TPS using 1024-bit keys will only support 1/5 of that (around 300 TPS) when supporting modern best practices, i.e. 2048-bit keys. Also of note is that NIST recommends ephemeral Diffie-Hellman - not RSA - for key exchange, and per TLS 1.0 specification, AES or 3DES-EDE-CBC, not RC4. These are much less "easy" ciphers than RC4, but unfortunately they are also more computationally intense, which also has an impact on overall performance.
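That drop in handshake capacity is easy to demonstrate, because the dominant per-handshake cost on the server is the RSA private-key operation. The sketch below, which assumes the Python cryptography package is installed, times private-key signatures at both key sizes as a rough stand-in for that work; the iteration count and the exact ratio you see will vary by machine, but the slowdown from 1024- to 2048-bit keys should be obvious.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def signatures_per_second(key_size, iterations=200):
    """Time RSA private-key signatures, the expensive half of an RSA key exchange."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=key_size)
    message = b"x" * 64
    start = time.perf_counter()
    for _ in range(iterations):
        key.sign(message, padding.PKCS1v15(), hashes.SHA256())
    return iterations / (time.perf_counter() - start)

rate_1024 = signatures_per_second(1024)
rate_2048 = signatures_per_second(2048)
print(f"1024-bit keys: {rate_1024:,.0f} signatures/sec")
print(f"2048-bit keys: {rate_2048:,.0f} signatures/sec")
print(f"Slowdown moving to 2048-bit keys: {rate_1024 / rate_2048:.1f}x")
```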
Key length and ciphers become important to the performance and capacity of SSL not just during the handshaking process, but in bulk-encryption rates. It is one thing to say a standard server deployed to support SSL can handle X handshakes (connections) and quite another for it to simultaneously perform bulk encryption on subsequent data responses. The size and number of those responses have a huge impact on the rate at which resources are consumed by SSL-related functions and, therefore, on the server's overall capacity. Larger data sets require more cryptographic attention, which can drag down the rate of encryption – that means slower response times for users and higher resource consumption on servers, which decreases resources available for handshaking and server processing and cascades throughout the entire system to result in a reduction of capacity and poor performance.

Tweaked configurations, poorly crafted performance tests, and a failure to consider basic mathematical relationships may seem to indicate SSL is "not" computationally expensive, yet this contradicts most experience with deploying SSL on the server. Consider this question and answer in the SSL FAQ for the Apache web server:

Why does my webserver have a higher load, now that it serves SSL encrypted traffic?
SSL uses strong cryptographic encryption, which necessitates a lot of number crunching. When you request a webpage via HTTPS, everything (even the images) is encrypted before it is transferred. So increased HTTPS traffic leads to load increases.

This is not myth, this is a well-understood fact – SSL requires higher computational load, which translates into higher consumption of resources. That consumption of resources increases with load. Having more resources does not change the consumption of SSL, it simply means that from a mathematical point of view the consumption rates relative to the total appear to be different. The "amount" of resources consumed by SSL (which is really the amount of resources consumed by cryptographic operations) is proportional to the total system resources available. The additional consumption of resources from SSL is highly dependent on the type and size of data being encrypted, the load on the server from both processing SSL and application requests, and on the volume of requests.

Interestingly enough, the same improvements in capacity and performance of SSL associated with "modern" processors and architecture are also applicable to intermediate SSL-managing devices. Both their specialized hardware (if applicable) and general purpose CPUs significantly increase the capacity and performance of SSL/TLS encrypted traffic on such solutions, making their economy of scale much greater than that of server-side deployed SSL solutions.

THE SSL-SERVER DEPLOYED DISECONOMY of SCALE

Certainly if you have only one or even two servers supporting an application for which you want to enable SSL, the costs are going to be significantly different than for an organization that may have ten or more servers comprising such a farm. It is not just the computational costs that make SSL deployed on servers problematic, it is also the associated impact on infrastructure and the cost of management. Reports that fail to factor in the associated performance and financial costs of maintaining valid certificates on each and every server – and the management / creation of SSL certificates for ephemeral virtual machines – are misleading.
Such solutions assume a static environment and a deep pocket, or perhaps less than ethical business practices. Such tactics attempt to reduce the capital expense associated with external SSL intermediaries by increasing the operational expense of purchasing and managing large numbers of SSL certificates – including having a ready store that can be used for virtual machine instances. As the number of services for which you want to provide SSL-secured communication increases, and the scale of those services increases, it becomes more and more costly to manage the required environment. Like IP address management in an increasingly dynamic environment, there is a diseconomy of scale that becomes evident as you attempt to scale the systems and processes involved.

DISECONOMY of SCALE #1: CERTIFICATE MANAGEMENT

Obviously the more servers you have, the more certificates you need to deploy. The costs associated with management of those certificates – especially in dynamic environments – continue to rise, and the possibility of missing an expiring certificate increases with the number of servers on which certificates are deployed. The promise of virtualization and cloud computing is to address the diseconomy of scale; the ability to provision a ready-to-function server, complete with the appropriate web or application stack serving up an application for purposes of scale, assumes that everything is ready. Unless you're willing to forgo properly provisioning SSL certificates, you cannot achieve this with a server-deployed SSL strategy. Each virtual image upon which a certificate is deployed must be pre-configured with the appropriate certificate and keys, and you can't launch the same one twice. This has the result of negating the benefits of a dynamically provisioned, scalable application environment and unnecessarily increases storage requirements because images aren't small. Failure to recognize and address the management and resulting impact on other areas of infrastructure (such as storage and scalability processes) means ignoring completely the actual real-world costs of a server-deployed SSL strategy.

It is always interesting to note the inability of web servers to support SSL for multiple hosts on the same server, i.e. virtual hosts.

Why can't I use SSL with name-based/non-IP-based virtual hosts?
The reason is very technical, and a somewhat "chicken and egg" problem. The SSL protocol layer stays below the HTTP protocol layer and encapsulates HTTP. When an SSL connection (HTTPS) is established Apache/mod_ssl has to negotiate the SSL protocol parameters with the client. For this, mod_ssl has to consult the configuration of the virtual server (for instance it has to look for the cipher suite, the server certificate, etc.). But in order to go to the correct virtual server Apache has to know the Host HTTP header field. To do this, the HTTP request header has to be read. This cannot be done before the SSL handshake is finished, but the information is needed in order to complete the SSL handshake phase.

Bingo! Because an intermediary terminates the SSL session and then determines where to route the requests, a variety of architectures can be more easily supported without the hassle of configuring each and every web server – which must be bound to IP address to support SSL in a virtual host environment. This isn't just a problem for hosting/cloud computing providers, this is a common issue faced by organizations supporting different "hosts" across the domain for tracking, for routing, for architectural control.
For example, api.example.com and www.example.com often end up on the same web server, but use different "hosts" for a variety of reasons. Each requires its own certificate and SSL configuration – and they must be bound to IP address – making scalability, particularly auto-scalability, more challenging and more prone to the introduction of human error. The OpEx savings in a single year from SSL certificate costs alone could easily provide an ROI justification for the CapEx of deploying an SSL device, before even considering the costs associated with managing such an environment. CapEx is a one-time expense while OpEx is recurring and expensive.

DISECONOMY of SCALE #2: CERTIFICATE/KEY SECURITY

The simplistic nature of the argument also fails to take into account the sensitive nature of keys and certificates and regulatory compliance issues that may require hardware-based storage and management of those keys regardless of where they are deployed (FIPS 140-2 level 2 and above). While there are secure and compliant HSMs (Hardware Security Modules) that can be deployed on each server, this requires serious attention and an increase in management effort and skills to deploy. The alternative is to fail to meet compliance (not acceptable for some) or simply deploy the keys and certificates on commoditized hardware (which increases the risk of theft and could lead to far more impactful breaches). For some IT organizations, meeting business requirements will mean relying on some form of hardware-based solution for certificate and key management, such as an HSM or FIPS 140-2 compliant hardware. The choices are to deploy on every server (note this may become very problematic when trying to support virtual machines) or to deploy on a single intermediary that can support all servers at the same time and scale without requiring additional hardware/software support.

DISECONOMY of SCALE #3: LOSS of VISIBILITY / SECURITY / AGILITY

SSL "all the way to the server" has a profound impact on the rest of the infrastructure, too, and on the scalability of services. Encrypted traffic cannot be evaluated or scanned or routed based on content by any upstream device. IDS and IPS and even so-called "deep packet inspection" devices upstream of the server cannot perform their tasks upon the traffic because it is encrypted. The solution is to deploy the certificates from every machine on those devices so that they can decrypt and re-encrypt the traffic. Obviously this introduces unacceptable amounts of latency into the exchange of data, but the alternative is to not scan or inspect the traffic, leaving the organization open to potential compromise.

It is also important to note that encrypting "bad" traffic, e.g. malicious code, malware, phishing links, etc., does not change the nature of that traffic. It's still bad, and it's also now "hidden" from every piece of security infrastructure that was designed and deployed to detect and stop it. A server-deployed SSL strategy eliminates visibility and control and the ability to rapidly address both technical and business-related concerns. Security is particularly negatively impacted. Emerging threats such as a new worm or virus for which AV signatures have not yet been updated can be immediately addressed by an intelligent intermediary – whether as a long-term solution or stop-gap measure.
Vulnerabilities in security protocols themselves, such as the TLS man-in-the-middle attack, can be immediately addressed by an intelligent, flexible intermediary long before the actual solutions providing the service can be patched and upgraded.

A purely technical approach to architectural decisions regarding the deployment of SSL or any other technology is simply unacceptable in an IT organization that is actively trying to support and align itself with the business. Architectural decisions of this nature can have a profound impact on the ability of IT to subsequently design, deploy and manage business-related applications and solutions and should not be made in a technical or business vacuum, without a full understanding of the ramifications.

- The Anatomy of an SSL Handshake [Network Computing]
- Get Ready for the Impact of 2048-bit RSA Keys [Network Computing]
- SSL handshake latency and HTTPS optimizations [semicomplete.com]
- Black Hat: PKI Hack Demonstrates Flaws in Digital Certificate Technology [DarkReading]
- SSL/TLS Strong Encryption: FAQ [apache.org]
- The Open Performance Testing Initiative
- The Order of (Network) Operations
- Congratulations! You do no nothing faster than anyone else!
- Data Center Feng Shui: SSL
- WILS: SSL TPS versus HTTP TPS over SSL
- F5 Friday: The 2048-bit Keys to the Kingdom
- TLS Man-in-the-Middle Attack Disclosed Yesterday Solved Today with Network-Side Scripting
DevCentral Top5 02/04/2011

If your week has been anything like mine, then you've had plenty to keep you busy. While I'd like to think that your "busy" equates to as much time on DevCentral checking out the cool happenings while people get their geek on as mine does, I understand that's less than likely. Fortunately, though, there is a mechanism by which I can distribute said geeky goodness for your avid assimilation. I give to you, the DC Top 5:

iRuling the New FSE Crop
http://bit.ly/f1JIiM
Easily my favorite thing that happened this week was something I was fortunate enough to get to be a part of. A new crop of FSEs came through corporate this week to undergo a training boot camp that has been, from all accounts, a smashing success. A small part of this extensive readiness regimen was an iRules challenge issued unto the newly empowered engineers by yours truly. Through this means they were intended to learn about iRules, DevCentral, and the many resources available to them for researching and investigating any requirements and questions they may have. The results are in as of today and I have to say I'm duly impressed. I'll post the results next week but, for now, here's a taste of the challenge that was issued. Keep in mind these people range from a few weeks to maybe a couple months tops experience with F5, let alone iRules or coding in general, so this was a tall order. The gauntlet was laid down and the engineers answered, and answered with vigor. Stay tuned for more to come.

Mitigate Java Vulnerabilities with iRules
http://bit.ly/gbnPOe
Jason put out a fantastic blog post this week showing how to thwart would-be JavaScript-abusing villains by way of iRules fu. Naturally I was interested so I investigated further. It turns out there was a vuln that cropped up plenty last week dealing with a specific string (2.2250738585072012e-308) that has a nasty habit of making the Java runtime compiler go into an infinite loop and, eventually, pack up its toys and go home. This is, as Jason accurately portrayed, "Not good." Luckily, though, iRules is able to leap to the rescue once more, as is its nature. By digging through the HTTP::request variable, Jason was able to quickly and easily strip out any possibly harmful instances of this string in the request headers. For more details on the problem, the process and the solution, click the link and have a read.

F5 Friday: 'IPv4 and IPv6 Can Coexist' or 'How to eat your cake and have it too'
http://bit.ly/ejYYSW
Whether it was the promise of eating cake or the timely topic of IPv4 trying to cling to its last moments of glory in a world hurtling quickly towards an IPv6 existence I don't know, but this one drew me in. Lori puts together an interesting discussion, as is often the case, in her foray into the "how can these two IP formats coexist" arena. With the reality of IPGeddon acting as the stick, the carrot of switching to an IPv6-compatible lifestyle seems mighty tasty for most businesses that want to continue being operational once the new order sets in. Time is quickly running out, as are the available IPv4 addresses, so the hour is nigh for decisions to be made. This is a look at one way in which you can exist in the brave new world of 128-bit addressing without having to reconfigure every system in your architecture. It's interesting, timely, and might just save you 128-bits worth of headaches.
Deduplication and Compression – Exactly the same, but different
http://bit.ly/h8q0OS
There's something that got passed over last week because of an absolute overabundance of goodness that I wanted to bring up this week, as I felt it warranted some further review and discussion. That is, Don's look at Deduplication and Compression. Taking the angle of the technologies being effectively the same is an interesting one to me. Certainly they aren't the same thing, right? Clearly one prevents data from being transmitted while the other minimizes the transmission necessary. That's different, right? Still though, as I was reading I couldn't help but find myself nodding in agreeance as Don laid out the similarities. Honestly, they really do accomplish the same thing, that is minimizing what must pass through your network, even though they achieve it by different means. So which should you use when? How do they play together? Which is more effective for your environment? All excellent questions, and precisely why this post found its way into the Top5. Go have a look for yourself.

Client Cert Fingerprint Matching via iRules
http://bit.ly/gY2M69
Continuing in the fine tradition of the outright thieving of other peoples' code to mold into fodder for my writing, this week I bring to you an awesome snippet from the land down under. Cameron Jenkins out of Australia was kind enough to share his iRule for Client Cert Fingerprint matching with the team. I immediately pounced on it as an opportunity to share another cool example of iRules doing what they do best: making stuff work. This iRule shows off an interesting way to compare cert fingerprints in an attempt to verify a cert's identity without needing to store the entirety of the cert and key. It's also useful for restricting access to a given list of certs. Very handy in some situations, and a wickedly simple iRule to achieve that level of functionality. Good on ya, Cameron, and thanks for sharing.

There you have it, another week, another 5 pieces of hawesome from DevCentral. See you next time, and happy weekend. #Colin
Beware Using Internal Encryption as an IT Security Blanket

It certainly sounds reasonable: networks are moving toward a perimeter-less model so the line between internal and external network is blurring. The introduction of cloud computing as overdraft protection (cloud-bursting) further blurs that perimeter such that it's more a suggestion than a rule. That makes the idea of encrypting everything, whether it's on the internal or external network, seem to be a reasonable one. Or does it?

THE IMPACT ON OPERATIONS

A recent post posits that PCI Standard or Not, Encrypting Internal Network Traffic is a Good Thing. The arguments are valid, but there is a catch (there's always a catch). Consider this nugget from the article:

Bottom line is everyone with confidential data to protect should enable encryption on all internal networks with access to that data. In addition, layer 2 security features should be enabled on the access switches carrying said data. Be sure to unencrypt your data streams before sending them to IPS, DLP, and other deep packet inspection devices. This is easy to say but in many cases harder to implement in practice. If you run into any issues feel free to post them here. I realize this is a controversial topic for security geeks (like myself) but given recent PCI breaches that took advantage of the above weaknesses, I have to error on the side of security. Sure more security doesn't always mean better security, but smarter security always equals better security, which I believe is the case here. [emphasis added]

It is the reminder to decrypt data streams before sending them to IPS, DLP, and other "deep packet inspection devices" that brings to light one of the issues with such a decision: complexity of operations and management. It isn't just the additional latency inherent in the decryption of secured data streams required for a large number of the devices in an architecture to perform their tasks that's the problem, though that is certainly a concern. The larger problem is the operational inefficiency that comes from the decryption of secured data at multiple points in the architecture.

See, there's this little thing called "keys" that have to be shared with every device in the data center that will decrypt data, and that means managing each of those key stores in their own right. Keys are the, well, key to the kingdom of data encryption and if they are lost or stolen it can be disastrous to the security of all affected systems and applications. By better securing data in flight through encryption of all data on the internal network, an additional layer of insecurity is introduced that must be managed.

But let's pretend this additional security issue doesn't exist, that all systems on which these keys are stored are secure (ha!). Operations must still (a) configure every inline device to decrypt and re-encrypt the data stream and (b) manage the keys/certificates on every inline device. That's in addition to managing the keys/certificates on every endpoint for which data is destined. There's also the possibility that intermediate devices for which data will be decrypted before receiving – often implemented using spanned/mirrored ports on a switch/router – will require a re-architecting of the network in order to implement such an architecture. Not only must each device be configured to decrypt and re-encrypt data streams, it must be configured to do so for every application that utilizes encryption on the internal network.
For an organization with only one or two applications this might not be so onerous a task, but for organizations that may be using multiple applications, domains, and thus keys/certificates, the task of deploying all those keys/certificates, configuring each device, and then managing them through the application lifecycle can certainly be a time-consuming process. This isn't a linear mathematics problem, it's exponential. For every key or certificate added, the cost of managing that information increases by the number of devices that must be in possession of that key/certificate (twenty certificates that must live on ten inline devices, for example, is two hundred certificate installations to deploy, track, and renew).

INTERNAL ENCRYPTION CAN HIDE REAL SECURITY ISSUES

The real problem, as evinced by recent breaches of payment card processing vendors like Heartland Systems, is not that data was or was not encrypted on the internal network, but that the systems through which that data was flowing were not secured. Attackers gained access through the systems, the ones we are pretending are secure for the sake of argument. One cannot capture and sniff out unsecured data on an internal network without first being on the internal network. This is a very important point so let me say it again: One cannot capture and sniff out unsecured data on an internal network without first being on the internal network.

It would seem, then, that the larger issue here is the security of the systems and devices through which sensitive data must travel, and that encryption is really just a means of last resort for data traversing the internal network. Internal encryption is often a band-aid which merely covers up the real problem of insecure systems and poorly implemented security policies. Granted, in many industries internal encryption is a requirement and must be utilized, but those industries also accept and grant IT the understanding that costs will be higher in order to implement such an architecture. The additional costs are built into the business model already. That's not necessarily true for most organizations, where operational efficiency is now just as high a priority as any other IT initiative.

The implementation of encryption on internal networks can also lead to a false sense of security. It is important to remember that encrypted tainted data is still tainted data; it is merely hidden from security systems, which are passive in nature, unless the network is architected (or re-architected) such that the data is decrypted before being channeled through the solutions. Encryption hides data from prying eyes; it does nothing to ensure the legitimacy of the data. Simply initiating a policy of "all data on all networks must be secured via encryption" does not make an organization more secure, and in fact it may lead to a less secure organization as it becomes more difficult and costly to implement security solutions designed to dig deeper into the data and ensure it is legitimate traffic free of taint or malicious intent.

Bottom line is everyone with confidential data to protect should enable encryption on all internal networks with access to that data.

The "bottom line" is everyone with confidential data to protect – which is just about every IT organization out there – needs to understand the ramifications of enabling encryption across the internal network, both technically and from a cost/management perspective. Encryption of data on internal networks is not a bad thing to do at all, but it is also not a panacea.
The benefits of implementing internal encryption need to be weighed against the costs and balanced with risk and not simply tossed blithely over the network like a security blanket.

- PCI Standard or Not, Encrypting Internal Network Traffic is a Good Thing
- The Real Meaning of Cloud Security Revealed
- The Unpossible Task of Eliminating Risk
- Damned if you do, damned if you don't
- The IT Security Flowchart
SANS Top 25 Epic Fail: CWE-319

If you've taken the time to read over the "Top 25 Most Dangerous Programming Errors" published by SANS recently, you may (or may not) have noticed that CWE-319 is an anomaly, and should be easily picked out by developers and security professionals in a game called "which one of these is not like the other".

CWE-319

If your software sends sensitive information across a network, such as private data or authentication credentials, that information crosses many different nodes in transit to its final destination. Attackers can sniff this data right off the wire, and it doesn't require a lot of effort. All they need to do is control one node along the path to the final destination, control any node within the same networks of those transit nodes, or plug into an available interface. Trying to obfuscate traffic using schemes like Base64 and URL encoding doesn't offer any protection, either; those encodings are for normalizing communications, not scrambling data to make it unreadable.

Prevention and Mitigations
- Architecture and Design: Secret information should not be transmitted in cleartext. Encrypt the data with a reliable encryption scheme before transmitting.
- Implementation: When using web applications with SSL, use SSL for the entire session from login to logout, not just for the initial login page.
- Operation: Configure servers to use encrypted channels for communication, which may include SSL or other secure protocols.

1. This is not a "programming error"

The first problem with the inclusion of this "error" on the list is that it is not a programming error. It may be a poor design, architectural, or deployment decision, but it is not an "error". While not necessarily a problem with the actual weakness described, the misnomer is frustrating and undermines the rest of the list, most of which are actual errors in coding practices that need to be addressed.

SSL can be easily enabled by any customer, regardless of how the web application is written. Using SSL has always been suggested as part of a secure architecture, and it is organizations not using SSL that bear the burden of failure to implement this simple security scheme, not necessarily developers. Trying to force software vendors to force SSL on their customers is an end-run around the sad fact that most organizations fail to implement proper encryption when necessary.

2. Mitigation through encryption can disrupt security systems internally

SSL-enabled servers require that the organization obtain and manage the appropriate server-side certificates. SSL usage is the responsibility of the organization deploying the software, not the software vendor. Ensuring the web application works correctly when deployed using SSL may be the vendor's responsibility, but configuring it that way is clearly a matter of architectural choice on the part of the organization deploying the software.

It is likely that this remediation solution was intended to direct developers to always use HTTPS instead of HTTP when loading URLs, rather than using relative paths. This likely requires rework on the part of the developers of web applications to obtain the host name dynamically before constructing the proper URL rather than using relative paths. This would also require organizations to ensure an environment that supports SSL, which puts the onus of a secure implementation squarely back on the organization, not the vendor.
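To make the suggested rework concrete, the hypothetical sketch below (plain WSGI, Python standard library only; it is not taken from the article or from the CWE guidance) shows the two habits being described: redirect any cleartext request to its HTTPS equivalent, and build absolute URLs from the request's Host header rather than hard-coding a scheme and hostname.

```python
from wsgiref.simple_server import make_server

def absolute_https_url(environ, path):
    """Build an absolute https:// URL from the request's Host header."""
    host = environ.get("HTTP_HOST") or environ["SERVER_NAME"]
    return f"https://{host}{path}"

def app(environ, start_response):
    # Anything that arrived over cleartext HTTP gets redirected to HTTPS.
    if environ.get("wsgi.url_scheme") != "https":
        location = absolute_https_url(environ, environ.get("PATH_INFO", "/"))
        start_response("301 Moved Permanently", [("Location", location)])
        return [b""]
    # Otherwise serve the response and ask the browser to stay on HTTPS for the whole session.
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Strict-Transport-Security", "max-age=31536000")])
    return [b"Entire session served over HTTPS\n"]

if __name__ == "__main__":
    # Illustration only: in practice TLS would terminate at a proxy or ADC in front of this app.
    make_server("0.0.0.0", 8080, app).serve_forever()
```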
The ramifications of implementing SSL from client all the way to server can include the inadvertent elimination of the ability of other security systems - IDS, IPS, WAF - to perform their tasks unless specifically configured to decrypt, then examine the requests and responses, and then re-encrypt the session before sending it on to the appropriate server. This requires re-architecture on the part of the organization, and careful consideration of the security of the systems on which such keys and/or certificates will be stored. This is important, as the compromise of any system storing the keys and/or certificates may lead to the "bad guys" obtaining these important pieces of security architecture, thus rendering any application or system relying upon that data insecure.

3. Encrypted malicious data is still malicious

A very wise man told me once that malicious data encrypted is still malicious. Using SSL encryption certainly keeps the "bad guys" from looking at and capturing sensitive data, but as noted in issue number 2 it also keeps security devices from inspecting the exchange in their goal of detecting and preventing malicious data from getting near the web application or web server, where it is likely to do harm. The "bad guys" have the same level of access to those means as do the normal users; this does nothing to prevent the insertion of malicious data but does make it more difficult to detect and prevent, unless the application requires client-side certificates, which opens yet another can of worms and can seriously degrade the flexibility of the application in supporting a wide variety of end-user devices. The result, no matter how it is implemented, is security theater at its finest.

CWE-319 should not have been included on a list of top "programming errors", and the remediation solutions offered fail to recognize that the majority of the burden of implementation is on the organization, not the software vendor. They fail to recognize the impact of the suggested implementations on the application and the supporting infrastructure, and they are likely to cause more problems than they will solve. The blind adoption of this list as a requirement for procurement by the state of New York, and likely others soon to follow, is little more than a grand gesture designed to send a message not to vendors, but to its customers and, likely, the courts. Certainly, requiring software to be certified against this list could be considered due diligence in any lawsuit resulting from the inadvertent leak of sensitive information, thereby proving no negligence on the part of the organization and therefore no liability.

While enabling SSL communications is certainly a good idea, it is important to remember that it - like other encryption schemes - is merely obfuscation. It will blindly transport malicious data as easily as it does legitimate data, and failure to adjust internal architectures to deal with SSL across all required security and application delivery devices does little to enhance security in any real meaningful way.

Related articles by Zemanta:
- Secure gmail account by turning on https permanently
- Windows encryption programs open to kernel hack