Cipher Suite Practices and Pitfalls

It seems like every time you turn around there is a new vulnerability to deal with, and some of them, such as Sweet32, have required altering cipher configurations for mitigation. Still other users may tweak their cipher suite settings to meet requirements for PCI compliance, regulatory issues, local compatibility needs, etc. However, once you start modifying your cipher suite settings you must take great care, as it is very easy to shoot yourself in the foot. Many misconfigurations will silently fail – seeming to achieve the intended result while opening up new, even worse, vulnerabilities. Let's take a look at cipher configuration on the F5 BIG-IP products to try to stay on the safe path.

What is a Cipher Suite?

Before we talk about how they're configured, let's define exactly what we mean by 'cipher suite', how it differs from just a 'cipher', and the components of the suite. Wikipedia has a good summary, so rather than reinvent the wheel: A cipher suite is a named combination of authentication, encryption, message authentication code (MAC) and key exchange algorithms used to negotiate the security settings for a network connection using the Transport Layer Security (TLS) / Secure Sockets Layer (SSL) network protocol. When we talk about configuring ciphers on BIG-IP we're really talking about configuring cipher suites. More specifically, the configured list of cipher suites is a menu of options available to be negotiated. Each cipher suite specifies the key exchange algorithm, authentication algorithm, cipher, cipher mode, and MAC that will be used. I recommend reading K15194: Overview of the BIG-IP SSL/TLS cipher suite for more information. But as a quick overview, let's look at a couple of example cipher suites. The cipher suite is in the format:

Key Exchange-Authentication-Cipher-Cipher Mode-MAC

Note that not all of these components may be explicitly present in the cipher suite, but they are still implicitly part of the suite. Let's consider this cipher suite:

ECDHE-RSA-AES256-GCM-SHA384

This breaks down as follows:
Key Exchange Algorithm: ECDHE (Elliptic Curve Diffie-Hellman Ephemeral)
Authentication Algorithm: RSA
Cipher: AES256 (aka AES with a 256-bit key)
Cipher Mode: GCM (Galois/Counter Mode)
MAC: SHA384 (aka SHA-2 (Secure Hash Algorithm 2) with a 384-bit hash)

This is arguably the strongest cipher suite we have on BIG-IP at this time. Let's compare that to a simpler cipher suite:

AES128-SHA

Key Exchange Algorithm: RSA (implied) – When it isn't specified, presume RSA.
Authentication Algorithm: RSA (implied) – When it isn't specified, presume RSA.
Cipher: AES128 (aka AES with a 128-bit key)
Cipher Mode: CBC (Cipher Block Chaining) (implied) – When it isn't specified, presume CBC.
MAC: SHA1 (Secure Hash Algorithm 1; SHA-1 always produces a 160-bit hash.)

This example illustrates that the cipher suite may not always explicitly specify every parameter, but they're still there. There are 'default' values that are fairly safe to presume when not otherwise specified. If the key exchange or authentication algorithm isn't specified, it is RSA. That's a safe bet. And if a cipher mode isn't specified it is CBC. Always CBC. Note that all ciphers currently supported on BIG-IP are CBC mode except for AES-GCM and RC4. ALL. I stress this as it has been a recurring source of confusion amongst customers. It isn't only the cipher suites which explicitly state 'CBC' in their name. Let's examine each of these components.
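As a quick, hedged check of the breakdown above, you can ask OpenSSL to expand both example suites with the standard 'openssl ciphers -v' command (the output below follows the format shown later in this article; newer OpenSSL builds may label the protocol column slightly differently):

openssl ciphers -v 'ECDHE-RSA-AES256-GCM-SHA384:AES128-SHA'
ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD
AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1

Note how the implied values for AES128-SHA – RSA key exchange and RSA authentication – show up explicitly in the Kx and Au columns, while the AEAD suite reports no separate MAC.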
This article is primarily about cipher suite configuration and ciphers, and not the SSL/TLS protocol, so I won't dive too deeply here, but I think it helps to have a basic understanding. Forgive me if I simplify a bit.

Key Exchange Algorithms

As a quick review of the difference between asymmetric key (aka public key) cryptography and symmetric key cryptography: With asymmetric key cryptography you have two keys – K_public and K_private – which have a mathematical relationship. Since you can openly share the public key there is no need to pre-share keys with anyone. The downside is that these algorithms are computationally expensive. Key lengths for a common algorithm such as RSA are at least 1024-bit, and 2048-bit is really the minimum acceptable these days. Symmetric key cryptography has only K_private. Both ends use the same key, which poses the problem of key distribution. The advantage is higher computational performance, and common key sizes are 128-bit or 256-bit. SSL/TLS, of course, uses both public and private key systems – the key exchange algorithm is the public key system used to exchange the symmetric key. Examples you'll see in cipher suites include ECDHE, DHE, RSA, ECDH, and ADH.

Authentication Algorithms

The authentication algorithm is sometimes grouped in with the key exchange algorithm for configuration purposes; 'ECDHE_RSA' for example. But we'll consider it as a separate component. This is the algorithm used in the SSL/TLS handshake for the server to sign (using the server's private key) elements sent to the client in the negotiation. The client can authenticate them using the server's public key. Examples include: RSA, ECDSA, DSS (aka DSA), and Anonymous. Anonymous means no authentication; this is generally bad. The most common way users run into this is by accidentally enabling an 'ADH' cipher suite. More on this later when we talk about pitfalls. Note that when RSA is used for the key exchange, authentication is inherent to the scheme so there really isn't a separate authentication step. However, most tools will list it out for completeness.

Cipher

To borrow once again from Wikipedia: In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, 'cipher' is synonymous with 'code', as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography.

This is what most of us mean when we refer to 'configuring ciphers'. We're primarily interested in controlling the cipher used to protect our information through encryption. There are many, many examples of ciphers which you may be familiar with: DES (Data Encryption Standard), 3DES (Triple DES), AES (Advanced Encryption Standard), RC4 (Rivest Cipher 4), Camellia, RC6, RC2, Blowfish, Twofish, IDEA, SEED, GOST, Rijndael, Serpent, MARS, etc. For a little cipher humor, I recommend RFC2410: The NULL Encryption Algorithm and Its Use With IPsec. Roughly speaking, ciphers come in two types – block ciphers and stream ciphers.

Block Ciphers

Block ciphers operate on fixed-length chunks of data, or blocks. For example, DES operates on 64-bit blocks while AES operates on 128-bit blocks. Most of the ciphers you'll encounter are block ciphers. Examples: DES, 3DES, AES, Blowfish, Twofish, etc.
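Returning to the key sizes mentioned under Key Exchange Algorithms, here is a minimal sketch of what each kind of key looks like in practice, using standard OpenSSL commands (the file name is arbitrary):

openssl genrsa -out example-rsa.key 2048    # 2048-bit asymmetric (RSA) private key
openssl rsa -in example-rsa.key -pubout     # derive the public half, safe to share openly
openssl rand -hex 16                        # 128 bits (16 bytes) of symmetric key material
openssl rand -hex 32                        # 256 bits (32 bytes) of symmetric key material

The asymmetric key is much larger, and operations with it are far more expensive, which is exactly why SSL/TLS uses it only to exchange the much smaller symmetric key.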
Stream Ciphers

Stream ciphers mathematically operate on each bit in the data flow individually. The most commonly encountered stream cipher is RC4, and that's deprecated. So we're generally focused on block ciphers – not that it really changes anything for the purposes of this article. All of the secrecy in encryption comes from the key that is used, not the cipher itself. Obtain the key and you can unlock the ciphertext. The cipher itself – the algorithm, source code, etc. – not only can be, but should be, openly available. History is full of examples of private cryptosystems failing due to weaknesses missed by their creators, while the most trusted ciphers were created via open processes (AES, for example). Keys are of varying lengths and, generally speaking, the longer the key the more secure the encryption. DES only had 56 bits of key data, and thus is considered insecure. We label 3DES as 168-bit, but it is really only equivalent to 112-bit strength. (More on this later.) Newer ciphers, such as AES, often offer options – 128, 192, or 256 bits of key. Remember, a 256-bit key is far more than twice as strong as a 128-bit key: it is 2^128 vs. 2^256, or roughly 3.4028237e+38 vs. 1.1579209e+77.

Cipher Mode

Cipher mode is the mode of operation used by the cipher when encrypting plaintext into ciphertext, or decrypting ciphertext into plaintext. The most common mode is CBC – Cipher Block Chaining. In cipher block chaining the ciphertext from block n feeds into the process for block n+1 – the blocks are chained together. (The CBC diagram on Wikipedia illustrates the chaining nicely.) As I mentioned previously, all ciphers on BIG-IP are CBC mode except for RC4 (the lone stream cipher, disabled by default starting in 11.6.0) and AES-GCM. AES-GCM was first introduced in 11.5.0, and it is only available for TLSv1.2 connections. GCM stands for Galois/Counter Mode, a more advanced mode of operation than CBC. In GCM the blocks are not chained together. GCM runs in an Authenticated Encryption with Associated Data (AEAD) mode which eliminates the separate per-message hashing step, and therefore it can achieve higher performance than CBC mode on a given hardware platform. It is also immune to the classes of attack that have harried CBC, such as the various padding and initialization-vector attacks (BEAST, Lucky 13, etc.). (Wikipedia has a good diagram of GCM as well.) The main drawback to AES-GCM is that it was only added in TLSv1.2, so any older clients which don't support TLSv1.2 cannot use it. There are other cipher suites officially supported in TLS which have other modes, but F5 does not currently support those ciphers so we won't get too deep into that. Other examples include AES-CCM (CTR mode with a CBC MAC; CTR is Counter Mode), CAMELLIA-GCM (CAMELLIA as introduced in 12.0.0 is CBC), and GOST CNT (aka CTR). We may see these in the future.

MAC aka Hash Function

What did we ever do before Wikipedia? A hash function is any function that can be used to map data of arbitrary size to data of fixed size. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. One use is a data structure called a hash table, widely used in computer software for rapid data lookup. Hash functions accelerate table or database lookup by detecting duplicated records in a large file. An example is finding similar stretches in DNA sequences. They are also useful in cryptography.
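To make the digest sizes concrete, here is a minimal sketch using the standard 'openssl dgst' command (the input string is arbitrary; only the output length matters here):

echo -n "example" | openssl dgst -sha1      # 160-bit digest, printed as 40 hex characters
echo -n "example" | openssl dgst -sha256    # 256-bit digest, printed as 64 hex characters

Whatever the size of the input, the digest is always the same fixed length for a given hash function.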
A cryptographic hash function allows one to easily verify that some input data maps to a given hash value, but if the input data is unknown, it is deliberately difficult to reconstruct it (or equivalent alternatives) by knowing the stored hash value. This is used for assuring integrity of transmitted data, and is the building block for HMACs, which provide message authentication. In short, the MAC provides message integrity. Hash functions you'll see here include MD5, SHA-1 (aka SHA), and SHA-2 (SHA224, SHA256, SHA384, & SHA512); the cipher suite listings also show AEAD (Authenticated Encryption with Associated Data), which isn't a hash at all but indicates there is no separate MAC. MD5 has long since been rendered completely insecure and is deprecated. SHA-1 is now being 'shamed', if not blocked, by browsers as it is falling victim to advances in cryptographic attacks. While some may need to continue to support SHA-1 cipher suites for legacy clients, migrating to SHA-2 as soon as possible is encouraged – especially for digital certificates.

Configuring Cipher Suites on BIG-IP

Now that we've covered what cipher suites are, let's look at where we use them. There are two distinct and separate areas where cipher suites are used – the host, or control plane, and TMM, or the data plane. On the host side, SSL/TLS is handled by OpenSSL and the configuration follows the standard OpenSSL configuration options.

Control Plane

The primary use of SSL/TLS on the control plane is for httpd. To see the currently configured cipher suite, use 'tmsh list sys http ssl-ciphersuite'. The defaults may vary depending on the version of TMOS. For example, these were the defaults in 12.0.0:

tmsh list sys http ssl-ciphersuite
sys httpd {
    ssl-ciphersuite DEFAULT:!aNULL:!eNULL:!LOW:!RC4:!MD5:!EXP
}

As of 12.1.2 these have been updated to a more explicit list:

tmsh list sys http ssl-ciphersuite
sys httpd {
    ssl-ciphersuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:AES128-SHA256:AES256-SHA256:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:DES-CBC3-SHA
}

You can change this configuration via 'tmsh modify sys http ssl-ciphersuite <value>'. One important thing to note is that the default is not just 'DEFAULT' as it is on the data plane. This is one thing that users have been caught out by, thinking that setting the value to 'DEFAULT' will reset the configuration. As OpenSSL provides SSL/TLS support for the control plane, if you want to see which ciphers will actually be supported you can use 'openssl ciphers -v <cipherstring>'.
For example: openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:AES128-SHA256:AES256-SHA256:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:DES-CBC3-SHA' ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES128-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1 ECDHE-RSA-AES256-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256 ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA384 ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-AES128-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA384 AES128-GCM-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(128) Mac=AEAD AES256-GCM-SHA384 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(256) Mac=AEAD AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1 AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1 AES128-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA256 AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256 ECDHE-RSA-DES-CBC3-SHA SSLv3 Kx=ECDH Au=RSA Enc=3DES(168) Mac=SHA1 ECDHE-ECDSA-DES-CBC3-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=3DES(168) Mac=SHA1 DES-CBC3-SHA SSLv3 Kx=RSA Au=RSA Enc=3DES(168) Mac=SHA1 Now let's see what happens if you use 'DEFAULT': openssl ciphers -v 'DEFAULT' ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA384 ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA384 ECDHE-RSA-AES256-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 DHE-DSS-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=DSS Enc=AESGCM(256) Mac=AEAD DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256 DHE-DSS-AES256-SHA256 TLSv1.2 Kx=DH Au=DSS Enc=AES(256) Mac=SHA256 DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1 DHE-DSS-AES256-SHA SSLv3 Kx=DH Au=DSS Enc=AES(256) Mac=SHA1 DHE-RSA-CAMELLIA256-SHA SSLv3 Kx=DH Au=RSA Enc=Camellia(256) Mac=SHA1 DHE-DSS-CAMELLIA256-SHA SSLv3 Kx=DH Au=DSS Enc=Camellia(256) Mac=SHA1 ECDH-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(256) Mac=AEAD ECDH-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(256) Mac=AEAD ECDH-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AES(256) Mac=SHA384 ECDH-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AES(256) Mac=SHA384 ECDH-RSA-AES256-SHA SSLv3 Kx=ECDH/RSA Au=ECDH Enc=AES(256) Mac=SHA1 ECDH-ECDSA-AES256-SHA SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=AES(256) Mac=SHA1 AES256-GCM-SHA384 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(256) Mac=AEAD AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256 AES256-SHA SSLv3 
Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1 CAMELLIA256-SHA SSLv3 Kx=RSA Au=RSA Enc=Camellia(256) Mac=SHA1 PSK-AES256-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=AES(256) Mac=SHA1 ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-RSA-AES128-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1 ECDHE-ECDSA-AES128-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 DHE-DSS-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=DSS Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(128) Mac=SHA256 DHE-DSS-AES128-SHA256 TLSv1.2 Kx=DH Au=DSS Enc=AES(128) Mac=SHA256 DHE-RSA-AES128-SHA SSLv3 Kx=DH Au=RSA Enc=AES(128) Mac=SHA1 DHE-DSS-AES128-SHA SSLv3 Kx=DH Au=DSS Enc=AES(128) Mac=SHA1 DHE-RSA-SEED-SHA SSLv3 Kx=DH Au=RSA Enc=SEED(128) Mac=SHA1 DHE-DSS-SEED-SHA SSLv3 Kx=DH Au=DSS Enc=SEED(128) Mac=SHA1 DHE-RSA-CAMELLIA128-SHA SSLv3 Kx=DH Au=RSA Enc=Camellia(128) Mac=SHA1 DHE-DSS-CAMELLIA128-SHA SSLv3 Kx=DH Au=DSS Enc=Camellia(128) Mac=SHA1 ECDH-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(128) Mac=AEAD ECDH-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(128) Mac=AEAD ECDH-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AES(128) Mac=SHA256 ECDH-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AES(128) Mac=SHA256 ECDH-RSA-AES128-SHA SSLv3 Kx=ECDH/RSA Au=ECDH Enc=AES(128) Mac=SHA1 ECDH-ECDSA-AES128-SHA SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=AES(128) Mac=SHA1 AES128-GCM-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(128) Mac=AEAD AES128-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA256 AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1 SEED-SHA SSLv3 Kx=RSA Au=RSA Enc=SEED(128) Mac=SHA1 CAMELLIA128-SHA SSLv3 Kx=RSA Au=RSA Enc=Camellia(128) Mac=SHA1 PSK-AES128-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=AES(128) Mac=SHA1 ECDHE-RSA-RC4-SHA SSLv3 Kx=ECDH Au=RSA Enc=RC4(128) Mac=SHA1 ECDHE-ECDSA-RC4-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=RC4(128) Mac=SHA1 ECDH-RSA-RC4-SHA SSLv3 Kx=ECDH/RSA Au=ECDH Enc=RC4(128) Mac=SHA1 ECDH-ECDSA-RC4-SHA SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=RC4(128) Mac=SHA1 RC4-SHA SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=SHA1 RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5 PSK-RC4-SHA SSLv3 Kx=PSK Au=PSK Enc=RC4(128) Mac=SHA1 ECDHE-RSA-DES-CBC3-SHA SSLv3 Kx=ECDH Au=RSA Enc=3DES(168) Mac=SHA1 ECDHE-ECDSA-DES-CBC3-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=3DES(168) Mac=SHA1 EDH-RSA-DES-CBC3-SHA SSLv3 Kx=DH Au=RSA Enc=3DES(168) Mac=SHA1 EDH-DSS-DES-CBC3-SHA SSLv3 Kx=DH Au=DSS Enc=3DES(168) Mac=SHA1 ECDH-RSA-DES-CBC3-SHA SSLv3 Kx=ECDH/RSA Au=ECDH Enc=3DES(168) Mac=SHA1 ECDH-ECDSA-DES-CBC3-SHA SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=3DES(168) Mac=SHA1 DES-CBC3-SHA SSLv3 Kx=RSA Au=RSA Enc=3DES(168) Mac=SHA1 PSK-3DES-EDE-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=3DES(168) Mac=SHA1 EDH-RSA-DES-CBC-SHA SSLv3 Kx=DH Au=RSA Enc=DES(56) Mac=SHA1 EDH-DSS-DES-CBC-SHA SSLv3 Kx=DH Au=DSS Enc=DES(56) Mac=SHA1 DES-CBC-SHA SSLv3 Kx=RSA Au=RSA Enc=DES(56) Mac=SHA1 EXP-EDH-RSA-DES-CBC-SHA SSLv3 Kx=DH(512) Au=RSA Enc=DES(40) Mac=SHA1 export EXP-EDH-DSS-DES-CBC-SHA SSLv3 Kx=DH(512) Au=DSS Enc=DES(40) Mac=SHA1 export EXP-DES-CBC-SHA SSLv3 Kx=RSA(512) Au=RSA Enc=DES(40) Mac=SHA1 export EXP-RC2-CBC-MD5 SSLv3 Kx=RSA(512) Au=RSA Enc=RC2(40) Mac=MD5 export EXP-RC4-MD5 SSLv3 Kx=RSA(512) Au=RSA Enc=RC4(40) Mac=MD5 export As you can see that 
enables far, far more ciphers, including a number of unsafe ciphers – export, MD5, DES, etc. This is a good example of why you always want to confirm your cipher settings and check exactly what is being enabled before placing new settings into production. Many security disasters could be avoided if everyone double-checked their settings first.

Let's take a closer look at how OpenSSL represents one of the cipher suites:

ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD

The columns are:
Cipher Suite: ECDHE-RSA-AES256-GCM-SHA384
Protocol: TLSv1.2
Key Exchange Algorithm (Kx): ECDH
Authentication Algorithm (Au): RSA
Cipher/Encryption Algorithm (Enc): AESGCM(256)
MAC (Mac): AEAD

Since the control plane uses OpenSSL you can use the standard OpenSSL documentation, so I won't spend a lot of time on that.

Data Plane

In TMM the cipher suites are configured in the Ciphers field of the Client SSL or Server SSL profiles. See K14783: Overview of the Client SSL profile (11.x - 12.x) and K14806: Overview of the Server SSL profile (11.x - 12.x), respectively, for more details. It is important to keep in mind that these are two different worlds with their own requirements and quirks. As most of the configuration activity, and security concerns, occur on the public facing side of the system, we'll focus on the Client SSL profile. Most of the things we'll cover here will also apply to the Server SSL profile. In the GUI it appears as an editable field. Presuming the profile was created with the name 'Test':

tmsh list ltm profile client-ssl Test
ltm profile client-ssl Test {
    app-service none
    cert default.crt
    cert-key-chain {
        default {
            cert default.crt
            key default.key
        }
    }
    chain none
    ciphers DEFAULT
    defaults-from clientssl
    inherit-certkeychain true
    key default.key
    passphrase none
}

Modifying the cipher configuration from the command line is simple:

tmsh list ltm profile client-ssl Test ciphers
ltm profile client-ssl Test {
    ciphers DEFAULT
}
tmsh modify ltm profile client-ssl Test ciphers 'DEFAULT:!3DES'
tmsh list ltm profile client-ssl Test ciphers
ltm profile client-ssl Test {
    ciphers DEFAULT:!3DES
}

Just remember the 'tmsh save sys config' when you're happy with the configuration. Note here the default is just 'DEFAULT'. What that expands to will vary depending on the version of TMOS. K13156: SSL ciphers used in the default SSL profiles (11.x - 12.x) defines the default values for each version of TMOS.
Or you can check it locally from the command line: tmm --clientciphers 'DEFAULT' On 12.1.2 that would be: tmm --clientciphers 'DEFAULT' ID SUITE BITS PROT METHOD CIPHER MAC KEYX 0: 159 DHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 EDH/RSA 1: 158 DHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 EDH/RSA 2: 107 DHE-RSA-AES256-SHA256 256 TLS1.2 Native AES SHA256 EDH/RSA 3: 57 DHE-RSA-AES256-SHA 256 TLS1 Native AES SHA EDH/RSA 4: 57 DHE-RSA-AES256-SHA 256 TLS1.1 Native AES SHA EDH/RSA 5: 57 DHE-RSA-AES256-SHA 256 TLS1.2 Native AES SHA EDH/RSA 6: 57 DHE-RSA-AES256-SHA 256 DTLS1 Native AES SHA EDH/RSA 7: 103 DHE-RSA-AES128-SHA256 128 TLS1.2 Native AES SHA256 EDH/RSA 8: 51 DHE-RSA-AES128-SHA 128 TLS1 Native AES SHA EDH/RSA 9: 51 DHE-RSA-AES128-SHA 128 TLS1.1 Native AES SHA EDH/RSA 10: 51 DHE-RSA-AES128-SHA 128 TLS1.2 Native AES SHA EDH/RSA 11: 51 DHE-RSA-AES128-SHA 128 DTLS1 Native AES SHA EDH/RSA 12: 22 DHE-RSA-DES-CBC3-SHA 168 TLS1 Native DES SHA EDH/RSA 13: 22 DHE-RSA-DES-CBC3-SHA 168 TLS1.1 Native DES SHA EDH/RSA 14: 22 DHE-RSA-DES-CBC3-SHA 168 TLS1.2 Native DES SHA EDH/RSA 15: 22 DHE-RSA-DES-CBC3-SHA 168 DTLS1 Native DES SHA EDH/RSA 16: 157 AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 RSA 17: 156 AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 RSA 18: 61 AES256-SHA256 256 TLS1.2 Native AES SHA256 RSA 19: 53 AES256-SHA 256 TLS1 Native AES SHA RSA 20: 53 AES256-SHA 256 TLS1.1 Native AES SHA RSA 21: 53 AES256-SHA 256 TLS1.2 Native AES SHA RSA 22: 53 AES256-SHA 256 DTLS1 Native AES SHA RSA 23: 60 AES128-SHA256 128 TLS1.2 Native AES SHA256 RSA 24: 47 AES128-SHA 128 TLS1 Native AES SHA RSA 25: 47 AES128-SHA 128 TLS1.1 Native AES SHA RSA 26: 47 AES128-SHA 128 TLS1.2 Native AES SHA RSA 27: 47 AES128-SHA 128 DTLS1 Native AES SHA RSA 28: 10 DES-CBC3-SHA 168 TLS1 Native DES SHA RSA 29: 10 DES-CBC3-SHA 168 TLS1.1 Native DES SHA RSA 30: 10 DES-CBC3-SHA 168 TLS1.2 Native DES SHA RSA 31: 10 DES-CBC3-SHA 168 DTLS1 Native DES SHA RSA 32: 49200 ECDHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 ECDHE_RSA 33: 49199 ECDHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 ECDHE_RSA 34: 49192 ECDHE-RSA-AES256-SHA384 256 TLS1.2 Native AES SHA384 ECDHE_RSA 35: 49172 ECDHE-RSA-AES256-CBC-SHA 256 TLS1 Native AES SHA ECDHE_RSA 36: 49172 ECDHE-RSA-AES256-CBC-SHA 256 TLS1.1 Native AES SHA ECDHE_RSA 37: 49172 ECDHE-RSA-AES256-CBC-SHA 256 TLS1.2 Native AES SHA ECDHE_RSA 38: 49191 ECDHE-RSA-AES128-SHA256 128 TLS1.2 Native AES SHA256 ECDHE_RSA 39: 49171 ECDHE-RSA-AES128-CBC-SHA 128 TLS1 Native AES SHA ECDHE_RSA 40: 49171 ECDHE-RSA-AES128-CBC-SHA 128 TLS1.1 Native AES SHA ECDHE_RSA 41: 49171 ECDHE-RSA-AES128-CBC-SHA 128 TLS1.2 Native AES SHA ECDHE_RSA 42: 49170 ECDHE-RSA-DES-CBC3-SHA 168 TLS1 Native DES SHA ECDHE_RSA 43: 49170 ECDHE-RSA-DES-CBC3-SHA 168 TLS1.1 Native DES SHA ECDHE_RSA 44: 49170 ECDHE-RSA-DES-CBC3-SHA 168 TLS1.2 Native DES SHA ECDHE_RSA Some differences when compared to OpenSSL are readily apparent. For starters, TMM kindly includes a column label header, and actually aligns the columns. The first column is simply a 0-ordinal numeric index, the rest are as follows: ID: The official SSL/TLS ID assigned to that cipher suite. SUITE: The cipher suite. BITS: The size of the key in bits. PROT: The protocol supported. METHOD: NATIVE (in TMM) vs. COMPAT (using OpenSSL code). CIPHER: The cipher. MAC: The hash function. KEYX: The Key Exchange and Authentication Algorithms Note that the MAC is a little misleading for AES-GCM cipher suites. 
There is no separate MAC as they're AEAD, but the hashing algorithm is used in the Pseudo-Random Function (PRF) and a few other handshake-related places.

Selecting the Cipher Suites

Now we know how to look at the current configuration, modify it, and list the actual ciphers that will be enabled by the listed suites. But what do we put into the configuration? Most users won't have to touch this. The default values are carefully selected by F5 to meet the needs of the majority of our customers. That's the good news. The bad news is that some customers will need to get in there and change the configuration – be it for regulatory compliance, internal policies, legacy client support, etc. Once you begin modifying it, the configuration is truly custom: every customer who uses a custom cipher configuration needs to determine what the proper list is for their needs. Let's say we have determined that we need to support only AES and AES-GCM, 128-bit or 256-bit, and only ECDHE key exchange. Any MAC or authentication algorithm is fine. OK, let's proceed from there. On 12.1.2 there are six cipher suites that fit those criteria. We could list them all explicitly:

tmm --clientciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-CBC-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-CBC-SHA'

That will work, but it gets unwieldy fast. Not only that, but in versions prior to 11.5.0 the ciphers configuration string was truncated at 256 bytes. Starting in 11.5.0 that was increased to 768 bytes, but that can still truncate long configurations. We'll revisit this when we get to the pitfalls section. Fortunately, there is an alternative – keywords! This will result in the same list of cipher suites:

tmm --clientciphers 'ECDHE+AES-GCM:ECDHE+AES'

That specifies the ECDHE key exchange with AES-GCM ciphers, and ECDHE with AES ciphers. Let's take a closer look to help understand what is happening here.

Keywords

Keywords are extremely important when working with cipher suite configuration, so we'll spend a little time on those. Most of these apply to both the control plane (OpenSSL) and the data plane (TMM), unless otherwise noted, but we're focused on the data plane as that's F5 specific. Keywords organize into different categories.

F5 specific:
NATIVE: cipher suites implemented natively in TMM
COMPAT: cipher suites using OpenSSL code; removed as of 12.0.0

Sorting:
@SPEED: Re-orders the list to put 'faster' (based on TMOS implementation performance) ciphers first. (F5 specific)
@STRENGTH: Re-orders the list to put 'stronger' (larger keys) ciphers first.

Protocol:
TLSv1_2: cipher suites available under TLSv1.2
TLSv1_1: cipher suites available under TLSv1.1
TLSv1: cipher suites available under TLSv1.0
SSLv3: cipher suites available under SSLv3

Note the 'Protocol' keywords in the cipher configuration control the ciphers associated with that protocol, and not the protocol itself! More on this in pitfalls.
Key Exchange Algorithms (sometimes with Authentication specified):
ECDHE or ECDHE_RSA: Elliptic Curve Diffie-Hellman Ephemeral (with RSA)
ECDHE_ECDSA: ECDHE with Elliptic Curve Digital Signature Algorithm
DHE or EDH: Diffie-Hellman Ephemeral (aka Ephemeral Diffie-Hellman) (with RSA)
DHE_DSS: DHE with Digital Signature Standard (aka DSA – Digital Signature Algorithm)
ECDH_RSA: Elliptic Curve Diffie-Hellman with RSA
ECDH_ECDSA: ECDH with ECDSA
RSA: RSA, obviously
ADH: Anonymous Diffie-Hellman

Note that the authentication algorithms don't work as standalone keywords in TMM. You can't use 'ECDSA' or 'DSS', for example. And you might think ECDHE or DHE includes all such cipher suites – note that they don't if you read carefully.

General cipher groupings:
DEFAULT: The default cipher suite for that version; see K13156
ALL: All NATIVE cipher suites; does not include COMPAT in current versions
HIGH: 'High' security cipher suites; >128-bit
MEDIUM: 'Medium' security cipher suites; effectively 128-bit suites
LOW: 'Low' security cipher suites; <128-bit, excluding export grade ciphers
EXP or EXPORT: Export grade ciphers; 40-bit or 56-bit
EXPORT56: 56-bit export ciphers
EXPORT40: 40-bit export ciphers

Note that DEFAULT does change periodically as F5 updates the configuration to follow the latest best practices. K13156: SSL ciphers used in the default SSL profiles (11.x - 12.x) documents these changes.

Cipher families:
AES-GCM: AES in GCM mode; 128-bit or 256-bit
AES: AES in CBC mode; 128-bit or 256-bit
CAMELLIA: Camellia in CBC mode; 128-bit or 256-bit
3DES: Triple DES in CBC mode; 168-bit (well, 112-bit really)
DES: Single DES in CBC mode, includes EXPORT ciphers; 40-bit & 56-bit
RC4: RC4 stream cipher
NULL: NULL cipher; just what it sounds like, it does nothing – no encryption

MAC aka Hash Function:
SHA384: SHA-2 384-bit hash
SHA256: SHA-2 256-bit hash
SHA1 or SHA: SHA-1 160-bit hash
MD5: MD5 128-bit hash

Other: On older TMOS versions, using the COMPAT keyword also enables two additional keywords:
SSLv2: Ciphers supported on the SSLv2 protocol
RC2: RC2 ciphers

So, let's go back to our example:

tmm --clientciphers 'ECDHE+AES-GCM:ECDHE+AES'

Note that you can combine keywords using '+' (plus sign), and multiple entries in the ciphers configuration line are separated with ':' (colon). You may also need to wrap the string in single quotes on the command line – I find it is a good habit to just always do so. We can also exclude suites or keywords. There are two ways to do that:

'!' (exclamation point) is a hard exclusion. Anything excluded this way cannot be implicitly or explicitly re-enabled. It is disabled, period.
'-' (minus sign or dash) is a soft exclusion. Anything excluded this way can be explicitly re-enabled later in the configuration string. (Note: The dash is also used in the names of many cipher suites, such as ECDHE-RSA-AES256-GCM-SHA384 or AES128-SHA. Do not confuse the dashes that are part of the cipher suite names with a soft exclusion, which always precedes, or prefixes, the value being excluded. 'AES128-SHA': the AES128-SHA cipher suite. '-SHA': SHA is soft excluded. '-AES128-SHA': the AES128-SHA cipher suite is soft excluded. Position matters.)

Let's look at the difference in hard and soft exclusions.
We'll start with our base example:

tmm --clientciphers 'ECDHE+AES-GCM:DHE+AES-GCM'
ID SUITE BITS PROT METHOD CIPHER MAC KEYX
0: 49200 ECDHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 ECDHE_RSA
1: 49199 ECDHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 ECDHE_RSA
2: 159 DHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 EDH/RSA
3: 158 DHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 EDH/RSA

Now let's look at a hard exclusion:

tmm --clientciphers 'ECDHE+AES-GCM:!DHE:DHE+AES-GCM'
ID SUITE BITS PROT METHOD CIPHER MAC KEYX
0: 49200 ECDHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 ECDHE_RSA
1: 49199 ECDHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 ECDHE_RSA

And lastly a soft exclusion:

tmm --clientciphers 'ECDHE+AES-GCM:-DHE:DHE+AES-GCM'
ID SUITE BITS PROT METHOD CIPHER MAC KEYX
0: 49200 ECDHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 ECDHE_RSA
1: 49199 ECDHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 ECDHE_RSA
2: 159 DHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 EDH/RSA
3: 158 DHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 EDH/RSA

Note that in the second example, the hard exclusion, we used '!DHE' and even though we then explicitly added 'DHE+AES-GCM' those ciphers were not enabled. This is because, once excluded with a hard exclusion, ciphers cannot be re-enabled. In the third example, the soft exclusion, we used '-DHE' and then 'DHE+AES-GCM'. This time it did enable those ciphers, which is possible with a soft exclusion. You might be wondering what soft disabling is useful for; why would you ever want to remove ciphers only to add them again? Reordering the ciphers is a common use case. As an example, DEFAULT orders ciphers differently in different versions, but mainly based on strength – bit size. Let's say we know 3DES is really 112-bit equivalent strength and not 168-bit as it is usually labeled. For some reason, maybe legacy clients, we can't disable them, but we want them to be last on the list. One way to do this is to first configure the DEFAULT list, then remove all of the 3DES ciphers. But then add the 3DES ciphers back explicitly – at the end of the list.
Let's try it – compare the following: tmm --clientciphers 'DEFAULT' ID SUITE BITS PROT METHOD CIPHER MAC KEYX 0: 159 DHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 EDH/RSA 1: 158 DHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 EDH/RSA 2: 107 DHE-RSA-AES256-SHA256 256 TLS1.2 Native AES SHA256 EDH/RSA 3: 57 DHE-RSA-AES256-SHA 256 TLS1 Native AES SHA EDH/RSA 4: 57 DHE-RSA-AES256-SHA 256 TLS1.1 Native AES SHA EDH/RSA 5: 57 DHE-RSA-AES256-SHA 256 TLS1.2 Native AES SHA EDH/RSA 6: 57 DHE-RSA-AES256-SHA 256 DTLS1 Native AES SHA EDH/RSA 7: 103 DHE-RSA-AES128-SHA256 128 TLS1.2 Native AES SHA256 EDH/RSA 8: 51 DHE-RSA-AES128-SHA 128 TLS1 Native AES SHA EDH/RSA 9: 51 DHE-RSA-AES128-SHA 128 TLS1.1 Native AES SHA EDH/RSA 10: 51 DHE-RSA-AES128-SHA 128 TLS1.2 Native AES SHA EDH/RSA 11: 51 DHE-RSA-AES128-SHA 128 DTLS1 Native AES SHA EDH/RSA 12: 22 DHE-RSA-DES-CBC3-SHA 168 TLS1 Native DES SHA EDH/RSA 13: 22 DHE-RSA-DES-CBC3-SHA 168 TLS1.1 Native DES SHA EDH/RSA 14: 22 DHE-RSA-DES-CBC3-SHA 168 TLS1.2 Native DES SHA EDH/RSA 15: 22 DHE-RSA-DES-CBC3-SHA 168 DTLS1 Native DES SHA EDH/RSA 16: 157 AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 RSA 17: 156 AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 RSA 18: 61 AES256-SHA256 256 TLS1.2 Native AES SHA256 RSA 19: 53 AES256-SHA 256 TLS1 Native AES SHA RSA 20: 53 AES256-SHA 256 TLS1.1 Native AES SHA RSA 21: 53 AES256-SHA 256 TLS1.2 Native AES SHA RSA 22: 53 AES256-SHA 256 DTLS1 Native AES SHA RSA 23: 60 AES128-SHA256 128 TLS1.2 Native AES SHA256 RSA 24: 47 AES128-SHA 128 TLS1 Native AES SHA RSA 25: 47 AES128-SHA 128 TLS1.1 Native AES SHA RSA 26: 47 AES128-SHA 128 TLS1.2 Native AES SHA RSA 27: 47 AES128-SHA 128 DTLS1 Native AES SHA RSA 28: 10 DES-CBC3-SHA 168 TLS1 Native DES SHA RSA 29: 10 DES-CBC3-SHA 168 TLS1.1 Native DES SHA RSA 30: 10 DES-CBC3-SHA 168 TLS1.2 Native DES SHA RSA 31: 10 DES-CBC3-SHA 168 DTLS1 Native DES SHA RSA 32: 49200 ECDHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 ECDHE_RSA 33: 49199 ECDHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 ECDHE_RSA 34: 49192 ECDHE-RSA-AES256-SHA384 256 TLS1.2 Native AES SHA384 ECDHE_RSA 35: 49172 ECDHE-RSA-AES256-CBC-SHA 256 TLS1 Native AES SHA ECDHE_RSA 36: 49172 ECDHE-RSA-AES256-CBC-SHA 256 TLS1.1 Native AES SHA ECDHE_RSA 37: 49172 ECDHE-RSA-AES256-CBC-SHA 256 TLS1.2 Native AES SHA ECDHE_RSA 38: 49191 ECDHE-RSA-AES128-SHA256 128 TLS1.2 Native AES SHA256 ECDHE_RSA 39: 49171 ECDHE-RSA-AES128-CBC-SHA 128 TLS1 Native AES SHA ECDHE_RSA 40: 49171 ECDHE-RSA-AES128-CBC-SHA 128 TLS1.1 Native AES SHA ECDHE_RSA 41: 49171 ECDHE-RSA-AES128-CBC-SHA 128 TLS1.2 Native AES SHA ECDHE_RSA 42: 49170 ECDHE-RSA-DES-CBC3-SHA 168 TLS1 Native DES SHA ECDHE_RSA 43: 49170 ECDHE-RSA-DES-CBC3-SHA 168 TLS1.1 Native DES SHA ECDHE_RSA 44: 49170 ECDHE-RSA-DES-CBC3-SHA 168 TLS1.2 Native DES SHA ECDHE_RSA tmm --clientciphers 'DEFAULT:-3DES:!SSLv3:3DES+ECDHE:3DES+DHE:3DES+RSA' ID SUITE BITS PROT METHOD CIPHER MAC KEYX 0: 159 DHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 EDH/RSA 1: 158 DHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 EDH/RSA 2: 107 DHE-RSA-AES256-SHA256 256 TLS1.2 Native AES SHA256 EDH/RSA 3: 57 DHE-RSA-AES256-SHA 256 TLS1 Native AES SHA EDH/RSA 4: 57 DHE-RSA-AES256-SHA 256 TLS1.1 Native AES SHA EDH/RSA 5: 57 DHE-RSA-AES256-SHA 256 TLS1.2 Native AES SHA EDH/RSA 6: 57 DHE-RSA-AES256-SHA 256 DTLS1 Native AES SHA EDH/RSA 7: 103 DHE-RSA-AES128-SHA256 128 TLS1.2 Native AES SHA256 EDH/RSA 8: 51 DHE-RSA-AES128-SHA 128 TLS1 Native AES SHA EDH/RSA 9: 51 
DHE-RSA-AES128-SHA 128 TLS1.1 Native AES SHA EDH/RSA 10: 51 DHE-RSA-AES128-SHA 128 TLS1.2 Native AES SHA EDH/RSA 11: 51 DHE-RSA-AES128-SHA 128 DTLS1 Native AES SHA EDH/RSA 12: 157 AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 RSA 13: 156 AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 RSA 14: 61 AES256-SHA256 256 TLS1.2 Native AES SHA256 RSA 15: 53 AES256-SHA 256 TLS1 Native AES SHA RSA 16: 53 AES256-SHA 256 TLS1.1 Native AES SHA RSA 17: 53 AES256-SHA 256 TLS1.2 Native AES SHA RSA 18: 53 AES256-SHA 256 DTLS1 Native AES SHA RSA 19: 60 AES128-SHA256 128 TLS1.2 Native AES SHA256 RSA 20: 47 AES128-SHA 128 TLS1 Native AES SHA RSA 21: 47 AES128-SHA 128 TLS1.1 Native AES SHA RSA 22: 47 AES128-SHA 128 TLS1.2 Native AES SHA RSA 23: 47 AES128-SHA 128 DTLS1 Native AES SHA RSA 24: 49200 ECDHE-RSA-AES256-GCM-SHA384 256 TLS1.2 Native AES-GCM SHA384 ECDHE_RSA 25: 49199 ECDHE-RSA-AES128-GCM-SHA256 128 TLS1.2 Native AES-GCM SHA256 ECDHE_RSA 26: 49192 ECDHE-RSA-AES256-SHA384 256 TLS1.2 Native AES SHA384 ECDHE_RSA 27: 49172 ECDHE-RSA-AES256-CBC-SHA 256 TLS1 Native AES SHA ECDHE_RSA 28: 49172 ECDHE-RSA-AES256-CBC-SHA 256 TLS1.1 Native AES SHA ECDHE_RSA 29: 49172 ECDHE-RSA-AES256-CBC-SHA 256 TLS1.2 Native AES SHA ECDHE_RSA 30: 49191 ECDHE-RSA-AES128-SHA256 128 TLS1.2 Native AES SHA256 ECDHE_RSA 31: 49171 ECDHE-RSA-AES128-CBC-SHA 128 TLS1 Native AES SHA ECDHE_RSA 32: 49171 ECDHE-RSA-AES128-CBC-SHA 128 TLS1.1 Native AES SHA ECDHE_RSA 33: 49171 ECDHE-RSA-AES128-CBC-SHA 128 TLS1.2 Native AES SHA ECDHE_RSA 34: 49170 ECDHE-RSA-DES-CBC3-SHA 168 TLS1 Native DES SHA ECDHE_RSA 35: 49170 ECDHE-RSA-DES-CBC3-SHA 168 TLS1.1 Native DES SHA ECDHE_RSA 36: 49170 ECDHE-RSA-DES-CBC3-SHA 168 TLS1.2 Native DES SHA ECDHE_RSA 37: 22 DHE-RSA-DES-CBC3-SHA 168 TLS1 Native DES SHA EDH/RSA 38: 22 DHE-RSA-DES-CBC3-SHA 168 TLS1.1 Native DES SHA EDH/RSA 39: 22 DHE-RSA-DES-CBC3-SHA 168 TLS1.2 Native DES SHA EDH/RSA 40: 22 DHE-RSA-DES-CBC3-SHA 168 DTLS1 Native DES SHA EDH/RSA 41: 10 DES-CBC3-SHA 168 TLS1 Native DES SHA RSA 42: 10 DES-CBC3-SHA 168 TLS1.1 Native DES SHA RSA 43: 10 DES-CBC3-SHA 168 TLS1.2 Native DES SHA RSA 44: 10 DES-CBC3-SHA 168 DTLS1 Native DES SHA RSA I added something else in there which I'll come back to later. Pitfalls As should be clear by now cipher configuration is a powerful tool, but as the song says, every tool is a weapon if you hold it right. And weapons are dangerous. With a little careless handling it is easy to lose a toe – or a leg. Whenever you are working with cipher suite configuration the old rule of 'measure twice, cut once' applies – and then double-check the work to be certain. There are several common pitfalls which await you. Misuse Perhaps the most common pitfall is simply misuse – using cipher suite configuration for that which it is not intended. And the single most common example of this comes from using cipher configuration to manipulate protocols. Given the keywords, as described above, it seems common for users to presume that if they want to disable a protocol, such as TLSv1.0, then the way to do that is to use a cipher suite keyword, such as !TLSv1. And, indeed, this may seem to work – but it isn't doing what is desired. The protocol is not disabled, only the ciphers that are supported for that protocol are. The protocol is configured on the VIP independently of the ciphers. !TLSv1 would disable all ciphers supported under the TLSv1.0 protocol, but not the protocol itself. Note that the protocol negotiation and the cipher negotiation in the SSL/TLS handshake are independent. 
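Based on the behavior described above, a quick, hedged way to see this for yourself is to expand the string with tmm (output omitted; the exact listing varies by TMOS version):

tmm --clientciphers 'DEFAULT:!TLSv1'

The rows listed for the TLS1 protocol drop out of the listing while the same suites remain for the other protocols – and, critically, the virtual server will still negotiate the TLSv1.0 protocol itself unless it is disabled in the profile options, as covered below.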
What happens if the VIP only supports TLSv1.0/v1.1/v1.2 and the client only supports SSLv3 and TLSv1.0? Well, they'd agree on TLSv1.0 as the common protocol. The cipher list the client sends in the Client Hello is independent of the protocol that is eventually negotiated. Say the client sends AES128-SHA and the server has that in its list, so it is selected. OK, we've agreed on a protocol and a cipher suite – only the server won't do any ciphers on TLSv1.0 because of '!TLSv1' in the ciphers configuration, and the connection will fail. That may seem like splitting hairs, but it makes a difference. If a scanner is looking for protocols that are enabled, and not the full handshake, it may still flag a system which has been configured this way. The protocol is negotiated during the SSL/TLS handshake before the cipher is selected. This also means the system is doing more work, as the handshake continues further before failing, and the log messages may be misleading. Instead of logging a protocol incompatibility, the logs will reflect the failure to find a viable cipher, which can be a red herring when it comes time to debug the configuration. The right way to do this is to actually disable the protocol, which doesn't involve the cipher suite configuration at all. For the control plane this is done through the ssl-protocol directive:

tmsh list sys http ssl-protocol
sys httpd {
    ssl-protocol "all -SSLv2 -SSLv3"
}

For example, if we wanted to disable TLSv1.0:

tmsh modify sys http ssl-protocol 'all -SSLv2 -SSLv3 -TLSv1'
tmsh list sys http ssl-protocol
sys httpd {
    ssl-protocol "all -SSLv2 -SSLv3 -TLSv1"
}

For the data plane this can be done via the Options List in the SSL profile GUI, via the No SSL, No TLSv1.1, etc. directives, or via the command line:

tmsh list ltm profile client-ssl Test options
ltm profile client-ssl Test {
    options {
        dont-insert-empty-fragments
    }
}
tmsh modify ltm profile client-ssl Test options {dont-insert-empty-fragments no-tlsv1}
tmsh list /ltm profile client-ssl Test options
ltm profile client-ssl Test {
    options {
        dont-insert-empty-fragments
        no-tlsv1
    }
}

The values are slightly different on the command line; use this command to see them all:

tmsh modify ltm profile client-ssl <profile-name> options ?

Use the right tool for the job and you'll be more likely to succeed.

Truncation

As I previously mentioned, in versions prior to 11.5.0 the ciphers configuration string was truncated at 256 bytes. Starting in 11.5.0 that was increased to 768 bytes (see K11481: The SSL profile cipher lists have a 256 character limitation for more information), but that can still silently truncate long configurations. This is not a theoretical issue; we've seen users run into this in the real world. For example, a little over a year ago I worked with a customer who was then using 11.4.1 HF8. They were trying to very precisely control which ciphers were enabled, and their order.
In order to do this they'd decided to enumerate every individual cipher in their configuration – resulting in this cipher suite configuration string:

TLSv1_2+ECDHE-RSA-AES256-CBC-SHA:TLSv1_1+ECDHE-RSA-AES256-CBC-SHA:TLSv1_2+ECDHE-RSA-AES128-CBC-SHA:TLSv1_1+ECDHE-RSA-AES128-CBC-SHA:TLSv1_2+DHE-RSA-AES256-SHA:TLSv1_1+DHE-RSA-AES256-SHA:TLSv1_2+DHE-RSA-AES128-SHA:TLSv1_1+DHE-RSA-AES128-SHA:TLSv1_2+AES256-SHA256:TLSv1_1+AES256-SHA:TLSv1_2+AES128-SHA256:TLSv1_1+AES128-SHA:TLSv1+ECDHE-RSA-AES256-CBC-SHA:TLSv1+ECDHE-RSA-AES128-CBC-SHA:TLSv1+DHE-RSA-AES256-SHA:TLSv1+DHE-RSA-AES128-SHA:TLSv1+AES256-SHA:TLSv1+AES128-SHA:TLSv1+DES-CBC3-SHA

That string would save in the configuration, and it was there if you looked at the bigip.conf file, but it was silently truncated when the configuration was loaded. Since this was 11.4.1, only the first 256 bytes were loaded successfully, which made the running configuration:

TLSv1_2+ECDHE-RSA-AES256-CBC-SHA:TLSv1_1+ECDHE-RSA-AES256-CBC-SHA:TLSv1_2+ECDHE-RSA-AES128-CBC-SHA:TLSv1_1+ECDHE-RSA-AES128-CBC-SHA:TLSv1_2+DHE-RSA-AES256-SHA:TLSv1_1+DHE-RSA-AES256-SHA:TLSv1_2+DHE-RSA-AES128-SHA:TLSv1_1+DHE-RSA-AES128-SHA:TLSv1_2+AES256-S

Note the last suite is truncated itself, which means it was invalid and therefore ignored. If their configuration had worked they would've had nineteen protocol+suite combinations – instead they had eight. Needless to say, this caused some problems. This customer was missing ciphers that they expected to have working. That is bad enough – but it could be worse. Let's imagine a customer who wants to specify several specific ciphers first, then generally enable a number of other TLSv1.2 and TLSv1.1 ciphers. And, of course, they are careful to disable dangerous ciphers!

TLSv1_2+ECDHE-RSA-AES256-CBC-SHA:TLSv1_1+ECDHE-RSA-AES256-CBC-SHA:TLSv1_2+ECDHE-RSA-AES128-CBC-SHA:TLSv1_1+ECDHE-RSA-AES128-CBC-SHA:TLSv1_2+DHE-RSA-AES256-SHA:TLSv1_1+DHE-RSA-AES256-SHA:TLSv1_2+DHE-RSA-AES128-SHA:TLSv1_1+DHE-RSA-AES128-SHA:TLSv1_2:TLSv1_1:!RC4:!MD5:!ADH:!DES:!EXPORT

OK, that looks fairly solid, right? What do you suppose the problem is? In 11.4.1 and earlier it would truncate to this:

TLSv1_2+ECDHE-RSA-AES256-CBC-SHA:TLSv1_1+ECDHE-RSA-AES256-CBC-SHA:TLSv1_2+ECDHE-RSA-AES128-CBC-SHA:TLSv1_1+ECDHE-RSA-AES128-CBC-SHA:TLSv1_2+DHE-RSA-AES256-SHA:TLSv1_1+DHE-RSA-AES256-SHA:TLSv1_2+DHE-RSA-AES128-SHA:TLSv1_1+DHE-RSA-AES128-SHA:TLSv1_2:TLSv1_1:

All of the exclusions were truncated off! Now we have the opposite problem – there are a number of ciphers enabled which the customer expects to be disabled! And they're BAD ciphers – ADH, DES, MD5, RC4. So this customer would be at high risk without realizing it. Be aware of this; it is very sneaky. The configuration will look fine; the truncation happens in the code when it loads the configuration. This is also one reason why I always recommend listing your exclusions first in the configuration string. Then you can never accidentally enable something.

Unintended Consequences

Let's say a new CVE is announced which exposes a very serious vulnerability in SSLv3 and TLSv1.0. There is no way to mitigate it, and the only solution is to limit connections to only TLSv1.1 and TLSv1.2. You want a cipher configuration to accomplish this. It seems straightforward – just configure it to use only ciphers on TLSv1.1 and TLSv1.2:

tmsh modify ltm profile client-ssl <profile> ciphers 'TLSv1_2:TLSv1_1'

Congratulations, you've solved the problem. You are no longer vulnerable to this CVE.
You know there is a but coming, right? What's wrong? Well, you just enabled all TLSv1.2 and TLSv1.1 ciphers. That includes such gems as RC4-MD5, RC4-SHA, DES, and a few ADH (Anonymous Diffie-Hellman) suites which have no authentication. As recently as 11.3.0 you'd even be enabling some 40-bit EXPORT ciphers. (We pulled them out of NATIVE in 11.4.0.) So you just leapt out of the frying pan and into the fire. Always, always, always check the configuration before using it. Running that through tmm --clientciphers 'TLSv1_2:TLSv1_1' would've raised red flags. Instead, this configuration would work without causing those problems:

tmsh modify ltm profile client-ssl <profile> ciphers 'DEFAULT:!TLSv1:!SSLv3'

Another option, and probably the better one, is to disable the SSLv3 and TLSv1.0 protocols on the VIP, as I discussed above. Of course, you can do both – belt and suspenders. And just to show you how easy it is to make such a mistake, F5 did this! In K13400: SSL 3.0/TLS 1.0 BEAST vulnerability CVE-2011-3389 and TLS protocol vulnerability CVE-2012-1870 we originally had the following in the mitigation section:

Note: Alternatively, to configure an SSL profile to use only TLS 1.1-compatible, TLS 1.2-compatible, AES-GCM, or RC4-SHA ciphers using the tmsh utility, use the following syntax: tmsh create /ltm profile client-ssl <name> ciphers TLSv1_1:TLSv1_2:AES-GCM:RC4-SHA

Yes, I had this fixed long ago. Remember back in the section on keywords I had this comparison example:

tmm --clientciphers 'DEFAULT'
tmm --clientciphers 'DEFAULT:-3DES:!SSLv3:3DES+ECDHE:3DES+DHE:3DES+RSA'

Who caught the '!SSLv3' in the second line? Why do you think I added that? Did I need to? Hint: What do you think the side effect of blanket enabling all of those 3DES ciphers would be if I didn't explicitly disable SSLv3?

Cipher Ordering

In SSL/TLS there are two main models for cipher suite negotiation – Server Cipher Preference or Client Cipher Preference. What does this mean? In SSL/TLS the client sends the list of cipher suites it is willing and able to support in the Client Hello. The server also has its list of cipher suites that it is willing and able to support. In Client Cipher Preference the server will select the first cipher on the client's list that is also in the server's list. Effectively this gives the client influence over which cipher is selected based on the order of the list it sends. In Server Cipher Preference the server will select the first cipher on its own list that is also on the client's list, so the server's ordering takes precedence. BIG-IP always operates in Server Cipher Preference, so be very careful in how you order your cipher suites. Preferred suites should go at the top of the list. How you order your cipher suites will directly affect which ciphers are used. It doesn't matter if a stronger cipher is available if a weak cipher is matched first.

HTTP/2

How is HTTP/2 a pitfall? The HTTP/2 RFC, RFC7540, includes a blacklist of ciphers that are valid in TLS but should not be used in HTTP/2. This can cause a problem on a server where the TLS negotiation is decoupled from the ALPN exchange for the higher level protocol. The server might select a cipher which is on the blacklist, and then when the connection attempts to step up to HTTP/2 via ALPN the client may terminate the connection with extreme prejudice. It is well known enough to be called out in the RFC – Section 9.2.2. F5 added support for HTTP/2 in 12.0.0 – and we fell into this trap.
Our DEFAULT ciphers list was ordered such that it was almost certain a blacklisted cipher would be selected. This was fixed in 12.0.0 HF3 and 12.1.0, but it serves as an example. On 12.0.0 FINAL through 12.0.0 HF2 a simple fix was to configure the ciphers to be 'ECDHE+AES-GCM:DEFAULT'. ECDHE+AES-GCM is guaranteed to be supported by any client compliant with RFC7540 (HTTP/2). Putting it first ensures it is selected before any blacklisted cipher.

3DES

Back in the section on ciphers I mentioned that we label 3DES as being 168-bit, but that it only provides the equivalent of 112-bit strength. So, what did I mean by that? DES operates on 64-bit data blocks, using 56 bits of key, so it has a strength of 2^56. 3DES, aka Triple DES, was a stop-gap designed to stretch the life of DES once 56 bits was too weak to be safe, until AES became available. 3DES uses the exact same DES cipher, it just uses it three times – hence the name. So you might think 3 x 56 bits = 168 bits, 2^168 strong. Right? No, not really. The standard implementation of 3DES is known as EDE – for Encrypt, Decrypt, Encrypt. (For reasons we don't need to get into here.) You take the 64-bit data block, run it through DES once to encrypt it with K1, then run it through again to decrypt it using K2, then encrypt it once again using K3. Three keys, that's still 168 bits, right? Well, you'd think so. But the devil is in the (implementation) details. First of all there are three keying options for 3DES:

- Keying option 1: K1, K2, K3 – 168 unique bits (but only 112-bit strength!)
- Keying option 2: K1, K2, K1 – 112 unique bits (but only 80-bit strength!)
- Keying option 3: K1, K1, K1 – 56 unique bits, 56-bit strength (equivalent to DES due to EDE!)

F5 uses keying option one, so we have 168 bits of unique key. However, 3DES with keying option one is subject to a meet-in-the-middle cryptographic attack which only has a cost of 2^112. It has even been reduced as low as 2^108, as described in this paper. So it does not provide the expected 168 bits of security, and is in fact weaker than AES128. To add some confusion, due to an old issue we used to describe 3DES as being 192-bit. See K17296: The BIG-IP system incorrectly reports a 192-bit key length for cipher suites using 3DES (DES-CBC3) for more details. Of course, with the appearance of the Sweet32 attack last fall I would encourage everyone to disable 3DES completely whenever possible. We're also seeing a growing number of scanners and audit tools recategorizing 3DES as a 'Medium' strength cipher, down from 'High', and correspondingly lowering the grade for any site still supporting it. If you don't need it, turn it off. See K13167034: OpenSSL vulnerability CVE-2016-2183 for more information.

Conclusion

Believe it or not, that's the quick overview of cipher suite configuration on BIG-IP. There are many areas where we could dig in further and spend some time in the weeds, but I hope that this article helps at least one person understand cipher suite configuration better, and to avoid the pitfalls that commonly claim those who work with them.
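To close, here is a minimal recap of the safe workflow discussed above. The profile name 'Test' and the cipher string are examples only – substitute your own – and, as always, preview the expansion before saving:

tmm --clientciphers 'DEFAULT:!3DES:!RC4'                              # preview exactly what will be enabled
tmsh modify ltm profile client-ssl Test ciphers 'DEFAULT:!3DES:!RC4'
tmsh list ltm profile client-ssl Test ciphers                         # confirm the running configuration
tmsh save sys config                                                  # persist it once you're happy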
Additional Resources

This article is by no means comprehensive, and for those interested I'd encourage additional reading:

BIG-IP SSL Cipher History by David Holmes, here on DevCentral
Cipher Rules And Groups in BIG-IP v13 by Chase Abbott, also on DevCentral
OpenSSL Cipher Documentation
K8802: Using SSL ciphers with BIG-IP Client SSL and Server SSL profiles
K15194: Overview of the BIG-IP SSL/TLS cipher suite
K13163: SSL ciphers supported on BIG-IP platforms (11.x - 12.x)
K13156: SSL ciphers used in the default SSL profiles (11.x - 12.x)
K17370: Configuring the cipher strength for SSL profiles (12.x)
K13171: Configuring the cipher strength for SSL profiles (11.x)
K14783: Overview of the Client SSL profile (11.x - 12.x)
K14806: Overview of the Server SSL profile (11.x - 12.x)

F5 402 Exam reading list and notes
Disclaimer: The collection of articles and documentation is credited to the original owners. This is not an official F5 402 exam guide.

I recently passed the F5 402 - Certified Solution Expert - Cloud exam. I am pleased that I finally achieved it. Many are asking what I used to prepare for the exam. First, be familiar with the 402 - CLOUD SOLUTIONS EXAM BLUEPRINT. It is located at K29900360: F5 certification | Exams and blueprints.
https://support.f5.com/csp/article/K29900360

The prerequisite to take the F5 402 exam is that you are currently an F5 CTS for LTM (301a and 301b) and DNS (302). These exams would have already exposed you to BIG-IP LTM and DNS. However, you should also read on and have an idea of what the other BIG-IP modules are and what they do. The F5 402 exam blueprint already gives you the topics you will need to be familiar with. It really helps if you have hands-on experience working in cloud environments, such as AWS and Azure, and container environments such as Kubernetes. For me, it was a bit of AWS and Kubernetes. You will need to be familiar with cloud terminology – services, features, etc. – and how it relates to cloud vendors. Familiarity with container orchestration terminology such as in Kubernetes will also help. Bundling these cloud/container terms and features, how they relate to BIG-IP deployments in the cloud, and mapping them to the F5 402 exam blueprint will help you organize your knowledge and prepare for the exam.

Looking back, and while preparing for the exam, here is the documentation I would start with to review and build a knowledge map. There are links in the articles that supplement the concepts described; my suggestion is to consult the F5 402 exam blueprint and see if you need more familiarity with a topic after reading through the articles.

https://clouddocs.f5.com/cloud/public/v1/
https://clouddocs.f5.com/cloud/public/v1/aws_index.html
https://clouddocs.f5.com/cloud/public/v1/azure_index.html
https://clouddocs.f5.com/cloud/public/v1/matrix.html
https://clouddocs.f5.com/containers/latest/
https://aws.amazon.com/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/
https://www.f5.com/company/blog/networking-in-the-age-of-containers
https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/
https://docs.microsoft.com/en-us/azure/architecture/aws-professional/services

Good Luck!

Mitigating JSON-based SQL injection with BIG-IP ASM / Advanced WAF Attack Signatures
Recently, news and research about a WAF bypass technique using JSON-based SQL syntax have been making the rounds on the interwebs. Claroty have published their research on this topic. https://claroty.com/team82/research/js-on-security-off-abusing-json-based-sql-to-bypass-waf The Claroty team reached out to the F5 SIRT and shared this research. Promptly, F5 released attack signatures for these JSON-based SQL injections back in March 2022 and documented them at K22788490: F5 SIRT Security Researcher Acknowledgement – Attack Signature Improvement. https://support.f5.com/csp/article/K22788490 The Attack Signature IDs, Attack Signature Update (ASU) filenames, and recommendations are documented in K000129977: BIG-IP ASM / Advanced WAF Attack Signatures for JSON-based SQL Injection for customers looking for this information in MyF5/AskF5. https://my.f5.com/manage/s/article/K000129977
Attack Signature ID | Name | Attack Type | Description
200102058 | New SQL-INJ expressions like "AND 1=1" (Postgres JSON) (Parameter) | SQL-Injection | SQL-Injection using Postgres JSON operators
200102059 | New SQL-INJ expressions like "AND 1=1" (Postgres JSON) (Header) | SQL-Injection | SQL-Injection using Postgres JSON operators
200102060 | New SQL-INJ expressions like "AND 1=1" (Postgres JSON) (URI) | SQL-Injection | SQL-Injection using Postgres JSON operators
200102061 | New SQL-INJ expressions like "OR 1=1" (Postgres JSON) (Parameter) | SQL-Injection | SQL-Injection using Postgres JSON operators
200102062 | New SQL-INJ expressions like "OR 1=1" (Postgres JSON) (Header) | SQL-Injection | SQL-Injection using Postgres JSON operators
200102063 | New SQL-INJ expressions like "OR 1=1" (Postgres JSON) (URI) | SQL-Injection | SQL-Injection using Postgres JSON operators
Attack Signature Update (ASU) filenames (released back in March 2022):
ASM-SignatureFile_20220315_113554.im
ASM-AttackSignatures_20220315_113554.im
These Attack Signatures for JSON-based SQL injection are part of the SQL Injection and Low Accuracy Attack Signature sets, so be sure to keep your Attack Signatures updated and include these signatures (through the Attack Signature Sets) in your BIG-IP ASM / Advanced WAF Security Policy. In this sample BIG-IP ASM / Advanced WAF Security Policy, the SQL Injection Attack Signature Set is configured, and this will include the JSON-based SQL Injection attack signatures among others.
Testing the Attack Signatures
From the Claroty research, they shared a sample HTTP URL with the JSON-based SQL injection. Here's a simplified sample:
http://site/?a=" OR '{"b":2}'::jsonb <@ '{"a":1, "b":2}'::jsonb union select ASCII(s.token) from unnest"
The [ OR '{"b":2}'::jsonb <@ '{"a":1, "b":2}'::jsonb] is the WAF bypass technique that uses JSON-based syntax in the SQL statement. When a web application protected with a BIG-IP ASM / Advanced WAF Security Policy that includes the JSON-based SQL injection Attack Signatures receives a similar request, the request is rejected. Here is a sample request done in a lab test. Here is the detected and blocked violation: the SQL-INJ expressions like "OR 1=1" (Postgres JSON) (Parameter) attack signature detected the JSON-based SQL injection. Notice that in this exercise, the sample HTTP request generated three occurrences of detected Attack Signatures. This means that other SQL injection techniques used can also be detected by the configured attack signatures and that there are multiple ways of detection.
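For readers who want to reproduce a similar lab test against their own policy, here is a minimal sketch. The target URL is hypothetical and should only ever point at a system you own and are authorized to test.

import requests

# Hypothetical lab target protected by a BIG-IP ASM / Advanced WAF policy.
TARGET = "http://lab-app.example.test/"

# Simplified JSON-based SQL injection payload from the research write-up.
payload = '''" OR '{"b":2}'::jsonb <@ '{"a":1, "b":2}'::jsonb union select ASCII(s.token) from unnest"'''

# Send it as the 'a' query parameter, as in the sample URL above.
resp = requests.get(TARGET, params={"a": payload}, timeout=10)

# A blocking policy typically returns a rejection page containing a support ID.
print(resp.status_code)
print(resp.text[:200])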
In the lab test above, the other attack signatures were: SQL-INJ "UNION SELECT" (Parameter) and SQL-INJ select ascii.
Conclusion
Use supported BIG-IP software versions that have not yet reached End of Software Development (EoSD), as these versions receive attack signature updates. From K5903: BIG-IP software support policy: BIG-IP ASM attack signature files are updated for major releases until the release reaches its EoSD milestone. BIG-IP ASM attack signature files are updated for maintenance releases until the associated Long-Term Stability Release reaches its EoSD milestone. Keep your Attack Signatures updated to receive new attack signatures. Do take note of the Signature Staging behaviour. From K82512024: Managing BIG-IP ASM Live Updates (14.1.x and later): When attack signatures are updated, new signatures are placed in staging (non-blocking) while updated signatures are enforced according to the Updated Signature Enforcement setting. Unchanged attack signatures remain in the configured mode. Review the Attack Signature sets configured on your BIG-IP ASM / Advanced WAF Security Policy. New Attack Signatures are assigned to Attack Signature sets; thus, it is important that the intended sets are configured on your security policy. For example, the detected attack signature "SQL-INJ expressions like "OR 1=1" (Postgres JSON) (Parameter)" in the lab test is part of the SQL Injection and Low Accuracy Attack Signature Sets. Either attack signature set needs to be assigned to the security policy to have these JSON-based SQL injection signatures enabled and block matched requests. Depending on the contents of the HTTP request, multiple attack signatures may be matched – as seen in the violations generated for the sample request.
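To help with the signature-set review recommended above, here is a minimal sketch that lists how many signature sets are assigned to each ASM / Advanced WAF policy over iControl REST. The host and credentials are placeholders, and the signature-sets sub-collection path is an assumption that may vary between versions.

import requests

# Placeholders - management address and credentials for a lab BIG-IP.
BIGIP = "https://192.0.2.10"
AUTH = ("admin", "admin")

# List ASM policies, then count the signature sets assigned to each one.
policies = requests.get(f"{BIGIP}/mgmt/tm/asm/policies",
                        auth=AUTH, verify=False).json().get("items", [])

for policy in policies:
    # Assumed sub-collection path; confirm against your version's iControl REST reference.
    url = f"{BIGIP}/mgmt/tm/asm/policies/{policy['id']}/signature-sets"
    sets = requests.get(url, auth=AUTH, verify=False).json().get("items", [])
    print(policy.get("name"), "->", len(sets), "signature sets assigned")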
F5 Python SDK Overview
Introduction
The purpose of this article is to give a brief overview of the F5 Python SDK. The F5 Python SDK provides a programmatic interface to BIG-IP and its modules. We will cover the basic concepts and definitions of the F5 Networks BIG-IP iControl REST interface and how they relate to the SDK in its current state (LTM / core, etc.). We will also demonstrate some calls and then show the current state of AFM in the SDK. As AFM is not yet developed in the SDK, we will show one way to build a framework that takes the RESTful JSON endpoints and programmatically iterates over them to pull additional endpoints.
Concepts
The SDK is an object-model-based SDK for the F5 Networks BIG-IP iControl REST interface. The structure of the SDK calls maps to iControl REST:
http://192.168.1.1/mgmt/tm/ltm/pool/~Common~mypool/members/~Common~m1:80
  mgmt            -> root
  tm, ltm         -> organizing collections (OC)
  pool            -> collection (Coll)
  ~Common~mypool  -> resource
  members         -> subcollection (SC)
  ~Common~m1:80   -> subcollection resource
Organizing Collection: An organizing collection is a superset of other collections. These are not configurable; rather, they contain other submodules which either contain configurable objects (Collection) or are configurable objects (Resource). For example, the ltm or net module listing would be an organizing collection, whereas ltm/pool or net/vlan would be collections. To retrieve either type, you use the get_collection method as shown below, with abbreviated output.
Example:
f5.bigip.tm maps to tmsh
f5.bigip.tm.sys maps to 'System'
f5.bigip.tm.ltm maps to 'Local Traffic'
Collection: A collection is similar to an organizing collection in that it is not a configurable object. Unlike an organizing collection, however, a collection only contains references to objects (or resources) of the same type. In the SDK, collection objects are usually plural, while Resource objects are singular. When the Resource object's corresponding URI is already plural, we append the name of the collection with _s.
Example:
URI | Collection | Resource
/mgmt/tm/net/tunnels/ | tm.net.tunnels | tm.net.tunnels.tunnel
/mgmt/tm/ltm/pool/ | tm.ltm.pools | tm.ltm.pools.pool
/mgmt/tm/ltm/pool/members/ | tm.ltm.pool.members_s | tm.ltm.pool.members_s.members
Example: Use f5.bigip.resource.Collection.get_collection() to get a list of the objects in the f5.bigip.tm.ltm.pool collection.
Resource: A resource is a fully configurable object for which the CURDLE methods are supported.
Methods
Method | HTTP Command | Action(s)
create() | POST | creates a new resource on the device with its own URI
update() | PUT | submits a new configuration to the device resource; sets the Resource attributes to the state reported by the device
modify() | PATCH | submits a new configuration to the device resource; sets only the attributes specified in the modify method. This is different from update because update will change all the attributes, not only the ones that you specify.
refresh() | GET | obtains the state of a device resource; sets the representing Python Resource Object; tracks device state via its attributes
delete() | DELETE | removes the resource from the device, sets self.__dict__ to {'deleted': True}
load() | GET | obtains the state of an existing resource on the device; sets the Resource attributes to match that state
exists() | GET | checks for the existence of an object on the BIG-IP
Example: Load a f5.bigip.tm.ltm.node.Node Resource object. The output of the f5.bigip.tm.ltm.node.Node.raw (above) shows all of the available attributes.
Subcollection: A subcollection is a Collection that's attached to a higher-level Resource object.
Subcollections are almost exactly the same as collections; the exception is that they can only be accessed via the resource they're attached to (the 'parent' resource).
Example: A pool resource has a members_s subcollection attached to it; you must create or load the 'parent' resource (pool) before you can access the subcollection (members_s).
>>> from f5.bigip import ManagementRoot
>>> mgmt = ManagementRoot('192.168.1.1', 'myuser', 'mypass')
>>> pool = mgmt.tm.ltm.pools.pool.load(partition='Common', name='p1')
>>> members = pool.members_s.get_collection()
Subcollection Resource: A subcollection resource is essentially the same as a resource. As with collections and subcollections, the only difference between the two is that you must access the subcollection resource via the subcollection attached to the main resource.
Example: To build on the subcollection example, pool is the resource, members_s is the subcollection, and members (the actual pool member) is the subcollection resource.
>>> from f5.bigip import ManagementRoot
>>> mgmt = ManagementRoot('192.168.1.1', 'myuser', 'mypass')
>>> pool = mgmt.tm.ltm.pools.pool.load(partition='Common', name='p1')
>>> member = pool.members_s.members.load(partition='Common', name='n1:80')
REST URIs: You can directly infer REST URIs from the Python expressions, and vice versa.
Examples
Expression: mgmt = ManagementRoot('<ip_address>', '<username>', '<password>')
URI Returned: https://<ip_address>/mgmt/
Expression: cm = mgmt.cm('<ip_address>', '<username>', '<password>')
URI Returned: https://<ip_address>/mgmt/cm
Expression: tm = mgmt.tm('<ip_address>', '<username>', '<password>')
URI Returned: https://<ip_address>/mgmt/tm
Expression: ltm = mgmt.tm.ltm('<ip_address>', '<username>', '<password>')
URI Returned: https://<ip_address>/mgmt/tm/ltm/
Expression: pools1 = mgmt.tm.ltm.pools
URI Returned: https://<ip_address>/mgmt/tm/ltm/pool
Expression: pool_a = pools1.create(partition="Common", name="foo")
URI Returned: https://<ip_address>/mgmt/tm/ltm/pool/~Common~foo
Test the SDK: Create a Virtual Environment
Here are the steps to set up a Python virtual environment:
- Install Python. You can download the installer from the official website.
- Install pip. Python 3 usually comes with pip preinstalled. However, if you get an error, you can install it using the following command: python get-pip.py
- Install virtualenv. You can install it using the following command: pip install virtualenv
- Create a virtual environment. You can create a virtual environment by specifying the target directory (absolute or relative to the current directory) which is to contain the virtual environment. The create method will either create the environment in the specified directory, or raise an appropriate exception. You can create a virtual environment using the following command: python3 -m venv f5venv
- Activate the virtual environment by running source f5venv/bin/activate
- Install f5-sdk into the virtual environment by running pip install f5-sdk
- Run the sample script provided below.
AFM Code Sample

from f5.bigip import ManagementRoot
import requests
import logging
import json
import sys


class objectview(object):
    def __init__(self, d):
        self.__dict__ = d


def bigip():
    return {
        "bigip": "10.155.255.16",
        "rest_url": "https://admin:admin@",
        "rest_user": "admin",
        "rest_pwd": "admin",
        "partition": "Common"
    }


def afm_rest_api():
    # AFM firewall policy/rule URIs used by the helper functions below.
    policy_name = "restApiDemo"
    rule_name = "restApiDemoRule"
    return {
        "create_policy": "/mgmt/tm/security/firewall/policy",
        "add_rule": "/mgmt/tm/security/firewall/policy/~Common~" + policy_name + "/rules",
        "get_rules": "/mgmt/tm/security/firewall/policy/~Common~" + policy_name + "/rules",
        "change_rule": "/mgmt/tm/security/firewall/policy/~Common~restApiDemo/rules/" + rule_name,
        "delete_rule": "/mgmt/tm/security/firewall/policy/~Common~restApiDemo/rules/" + rule_name,
        "global_context": "/mgmt/tm/security/firewall/globalRules/",
        "global_rules": "/mgmt/tm/security/firewall/globalRules",
        "delete_policy": "/mgmt/tm/security/firewall/policy/~Common~" + policy_name
    }


def create_firewall_policy():
    bip = objectview(bigip())
    api = objectview(afm_rest_api())
    # SDK connection; AFM endpoints are not yet modeled in the SDK, so requests is used below.
    mgmt = ManagementRoot(bip.bigip, bip.rest_user, bip.rest_pwd)
    payload = '''{"name": "restApiDemo"}'''
    logger.info(bip.rest_url + bip.bigip + api.create_policy)
    resp = requests.post(bip.rest_url + bip.bigip + api.create_policy,
                         headers={'accept': 'application/json', 'content-type': 'application/json'},
                         auth=(bip.rest_user, bip.rest_pwd), data=payload, verify=False)
    logger.info(resp.text)


def add_rule():
    bip = objectview(bigip())
    api = objectview(afm_rest_api())
    mgmt = ManagementRoot(bip.bigip, bip.rest_user, bip.rest_pwd)
    headers = {'accept': 'application/json', 'content-type': 'application/json'}
    payload = '''{"name":"restApiDemoRule", "action":"reject", "place-before":"first"}'''
    logger.info(bip.rest_url + bip.bigip + api.add_rule)
    resp = requests.post(bip.rest_url + bip.bigip + api.add_rule, headers=headers,
                         auth=(bip.rest_user, bip.rest_pwd), data=payload, verify=False)
    logger.info(resp.text)


def display_rules():
    bip = objectview(bigip())
    api = objectview(afm_rest_api())
    mgmt = ManagementRoot(bip.bigip, bip.rest_user, bip.rest_pwd)
    headers = {'accept': 'application/json', 'content-type': 'application/json'}
    logger.info(bip.rest_url + bip.bigip + api.get_rules)
    resp = requests.get(bip.rest_url + bip.bigip + api.get_rules, headers=headers,
                        auth=(bip.rest_user, bip.rest_pwd), verify=False)
    logger.info(resp.text)


def change_rule():
    bip = objectview(bigip())
    api = objectview(afm_rest_api())
    mgmt = ManagementRoot(bip.bigip, bip.rest_user, bip.rest_pwd)
    headers = {'accept': 'application/json', 'content-type': 'application/json'}
    payload = '''{"action":"accept"}'''
    logger.info(bip.rest_url + bip.bigip + api.change_rule)
    resp = requests.patch(bip.rest_url + bip.bigip + api.change_rule, headers=headers,
                          auth=(bip.rest_user, bip.rest_pwd), data=payload, verify=False)
    logger.info(resp.text)


def delete_rule():
    bip = objectview(bigip())
    api = objectview(afm_rest_api())
    mgmt = ManagementRoot(bip.bigip, bip.rest_user, bip.rest_pwd)
    headers = {'accept': 'application/json', 'content-type': 'application/json'}
    logger.info(bip.rest_url + bip.bigip + api.delete_rule)
    resp = requests.delete(bip.rest_url + bip.bigip + api.delete_rule, headers=headers,
                           auth=(bip.rest_user, bip.rest_pwd), verify=False)
    logger.info(resp.text)


def global_context():
    bip = objectview(bigip())
    api = objectview(afm_rest_api())
    mgmt = ManagementRoot(bip.bigip, bip.rest_user, bip.rest_pwd)
    headers = {'accept': 'application/json', 'content-type': 'application/json'}
    # Enforce the demo policy in the global firewall context.
    payload = '''{"enforcedPolicy":"restApiDemo"}'''
    logger.info(bip.rest_url + bip.bigip + api.global_context)
    resp = requests.patch(bip.rest_url + bip.bigip + api.global_context, headers=headers,
                          auth=(bip.rest_user, bip.rest_pwd), data=payload, verify=False)
    logger.info(resp.text)


def global_rules():
    bip = objectview(bigip())
    api = objectview(afm_rest_api())
    mgmt = ManagementRoot(bip.bigip, bip.rest_user, bip.rest_pwd)
    headers = {'accept': 'application/json', 'content-type': 'application/json'}
    logger.info(bip.rest_url + bip.bigip + api.global_rules)
    resp = requests.get(bip.rest_url + bip.bigip + api.global_rules, headers=headers,
                        auth=(bip.rest_user, bip.rest_pwd), verify=False)
    logger.info(resp.text)


def delete_policy():
    bip = objectview(bigip())
    api = objectview(afm_rest_api())
    mgmt = ManagementRoot(bip.bigip, bip.rest_user, bip.rest_pwd)
    headers = {'accept': 'application/json', 'content-type': 'application/json'}
    # Detach the policy from the global context before deleting it.
    payload = '''{"enforcedPolicy":""}'''
    logger.info(bip.rest_url + bip.bigip + api.delete_policy)
    resp = requests.patch(bip.rest_url + bip.bigip + api.global_rules, headers=headers,
                          auth=(bip.rest_user, bip.rest_pwd), data=payload, verify=False)
    resp = requests.delete(bip.rest_url + bip.bigip + api.delete_policy, headers=headers,
                           auth=(bip.rest_user, bip.rest_pwd), verify=False)
    logger.info(resp.text)


if __name__ == "__main__":
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    command = " ".join(sys.argv[1:])
    if command == "":
        logger.info("""Examples:
python afm_rest_api.py "create_firewall_policy()"
python afm_rest_api.py "add_rule()"
python afm_rest_api.py "display_rules()"
python afm_rest_api.py "change_rule()"
python afm_rest_api.py "delete_rule()"
python afm_rest_api.py "global_context()"
python afm_rest_api.py "global_rules()"
python afm_rest_api.py "delete_policy()"
""")
    else:
        logger.info(command)
        eval(command)
Why We CVE
Background
First, for those who may not already know, I should probably explain what those three letters, CVE, mean. Sure, they stand for “Common Vulnerabilities and Exposures”, but what does that mean? What is the purpose? Borrowed right from the CVE.org website: The mission of the CVE® Program is to identify, define, and catalog publicly disclosed cybersecurity vulnerabilities. There is one CVE Record for each vulnerability in the catalog. The vulnerabilities are discovered then assigned and published by organizations from around the world that have partnered with the CVE Program. Partners publish CVE Records to communicate consistent descriptions of vulnerabilities. Information technology and cybersecurity professionals use CVE Records to ensure they are discussing the same issue, and to coordinate their efforts to prioritize and address the vulnerabilities. To state it simply, the purpose of a CVE record is to provide a unique identifier for a specific issue. I’m sure many of those reading this have dealt with questions such as “Does that new vulnerability announced today affect us?” or “Do we need to worry about that TCP vulnerability?” To which the reaction is likely exasperation and a question along the lines of “Which one? Can you provide any more detail?” We certainly see a fair number of support tickets along these lines ourselves. It gets worse when trying to discuss something that isn’t brand new, but years old. Sure, you might be able to say “Heartbleed”, and many will know what you’re talking about. (And do you believe that was April 2014? That’s forever ago in infosec years.) But what about the thousands of vulnerabilities announced each year that don’t get cute names and make headlines? Remember that Samba vulnerability? You know, the one from around the same time? No, the other one, improper initialization or something? Fun, right? It is much easier to say CVE-2014-0178 and everyone knows exactly what is being discussed, or at least can immediately look it up. Heartbleed, BTW, was CVE-2014-0160. If you have the CVE ID you can look it up at CVE.org, NVD (National Vulnerability Database), and many other resources. All parties can immediately be on the same page with the same basic understanding of the fundamentals. That is, simply, the power of CVE. It saves immeasurable time and confusion. I’m not going to go into detail on how the CVE program works, that’s not the intent of this article – though perhaps I could do that in the future if there is interest. Leave a comment below. Like and subscribe. Hit the bell icon… Sorry, too much YouTube. All that’s important is to note that F5 is a CNA, or CVE Numbering Authority: An organization responsible for the regular assignment of CVE IDs to vulnerabilities, and for creating and publishing information about the Vulnerability in the associated CVE Record. Each CNA has a specific Scope of responsibility for vulnerability identification and publishing. Each CNA has a ‘Scope’ statement, which defines what the CNA is responsible for within the CVE program. This is F5’s statement: All F5 products and services, commercial and open source, which have not yet reached End of Technical Support (EoTS). All legacy acquisition products and brands including, but not limited to, NGINX, Shape Security, Volterra, and Threat Stack. F5 does not issue CVEs for products which are no longer supported. And F5’s disclosure policy is defined by K4602: Overview of the F5 security vulnerability response policy.
F5, CVEs, and Disclosures While CVEs have been published sporadically for F5 products since at least 2002 (CVE-1999-1550 – yes, a 1999 ID but it was published in 2002 – that’s another topic), things really changed in 2016 after the creation of the F5 SIRT in late 2015. One of the first things the F5 SIRT did was to officially join the CVE program, making F5 a CNA, and to formalize F5’s approach to first-party security disclosures, including CVE assignment. This was all in place by late 2016 and the F5 SIRT began coordinating F5’s disclosures. I’ve been involved with that since very early on, and have been F5’s primary point of contact with the CVE program and related working groups (I participate in the AWG, QWG, and CNACWG) for a number of years now. Over time I became F5’s ‘vulnerability person’ and have been involved in pretty much every disclosure F5 has made for a number of years now. It’s my full-time role. The question has been asked, why? Why disclose at all? Why air ‘dirty laundry’? There is, I think, a natural reluctance to announce to the world when you make a mistake. You’d rather just quietly correct it and hope no one noticed, right? I’m sure we’ve all done that at some point in our lives. No harm, no foul. Except that doesn’t work with security. I’ve made the argument about ‘doing the right thing’ for our customers in various ways over the years, but eventually it distilled down to what has become something of a personal catchphrase: Our customers can’t make informed decisions about their networks if we don’t inform them. Networks have dozens, hundreds, thousands of devices from many different vendors. It is easy to say “Well, if everyone keeps up with the latest versions, they’ll always have the latest fixes.” But that’s trite, dismissive, and wholly unrealistic – in my not-so-humble opinion. Resources are finite and prioritizations must be made. Do I need to install this new update, or can I wait for the next one? If I need to install it, does it have to happen today, or can it wait for the next scheduled maintenance? We cannot, and should not, be making decisions for our customers and their networks. Customers and networks are unique, and all have different needs, attack surfaces, risk tolerance, regulatory requirements, etc. And so F5’s role is to provide the information necessary for them to conduct their own analysis and make their own decisions about the actions they need to take, or not. We must support our customers, and that means admitting when we make mistakes and have security issues that impact them. This is something I believe in strongly, even passionately, and it is what guides us. Our guiding philosophy since day one, as the F5 SIRT, has been to ‘do the right thing for our customers’, even if that may not show F5 in the best light or may sometimes make us unpopular with others. We’re there to advocate for improved security in our products, and for our customers, above all else. We never want to downplay anything, and our approach has always been to err on the side of caution. If an issue could theoretically be exploited, then it is considered a vulnerability. We don’t want to cause undue alarm, or Fear, Uncertainty, and Doubt (FUD), for anyone, but in security a false negative is worse than a false positive. It is better to take an action to address an issue that may not truly be a problem than to ignore an issue that is. All vendors have vulnerabilities, that’s inevitable with any complex product and codebase. 
Some vendors seem to never disclose any vulnerabilities, and I’m highly skeptical when I see that. I don’t care for the secretive approach, personally. Some vendors may disclose issues but choose not to participate in the CVE program. I think that’s unfortunate. While I’m all for disclosure, I hope those vendors come to see the value in the CVE program not only for their customers, but for themselves. It does bring with it some structure and rigor that may not otherwise be present in the processes. Not to mention all of the tooling designed to work with CVEs. I’ve been heartened to see the rapid growth in the CVE program the past few years, and especially the past year. There has been a steady influx of new CNAs to the program. The original structure of the program was fairly ‘vendor-centric’, but it has been updated to welcome open-source projects and there has been increasing participation from the FOSS community as well. The Future In 2022 F5 introduced a new way of handling disclosures, our Quarterly Security Notification (QSN) process, after an initial trial in late 2021. While not universal, the response has been overwhelmingly positive – you may not be able to please all the people, all the time, but it seems you can please a lot of them. The QSN was primarily designed to make disclosures more predictable and less disruptive to our customers. Consolidating disclosures and decoupling them from individual software releases has allowed us to radically change our processes, introducing additional levels of review and rigor. At the same time, independent of the QSN process, the F5 SIRT had also begun work on standardized language templates for our Security Advisories. As you might expect, there are teams of people who work on issues – engineers who work on the technical evaluation, content creators, technical writers, etc. With different people working on different issues, it was only natural that they’d find different ways to say the same thing. We might disclose similar DoS issues at the same time, only to have the language in each Security Advisory (SA) be different. This could create confusion, especially as sometimes people can read a little too much into things. “These are different, there must be some significance in that.” No, they’re different because different people wrote them is all. Still, confusion or uncertainty is not what you want with security documentation. We worked to create standardized templates so that similar issues will have similar language, no matter who works on the issue. I believe that these efforts have resulted in a higher quality of Security Advisory, and the feedback we’ve received seems to support that. I hope you agree. These efforts are ongoing. The templates are not carved in stone but are living documents. We listen to feedback and update the templates as needed. When we encounter an issue that doesn’t fit an existing template a new template is created. Over time we’ve introduced new features to the advisories, such as the Common Vulnerability Scoring System (CVSS) and, more recently, Common Weakness Enumeration (CWE). We continue to evaluate feedback and requests, and industry trends, for incorporation into future disclosures. We’re currently working on internal tooling to automate some of our processes, which should improve consistency and repeatability – while allowing us to expand the work we do. Frankly, I only scale so far, and the cloning vats just didn’t work out. Having more tooling will allow us to do more with our resources. 
Part of the plan is that the tooling will allow us to provide disclosures in multiple formats – but I don’t want to say anything more about that just yet as much work remains to be done. So why do we CVE? For you – our customers, security professionals, and the industry in general. We assign CVEs and disclose issues not only for the benefit of our customers, but to lead by example. The more vendors who embrace openness and disclose CVEs, the more the practice is normalized, and the better the security community is for it. There isn’t really any joy in being the bearer of bad news, other than the hope that it creates a better future. Postscript If you’re still reading this, thank you for sticking with me. Vulnerability management and disclosure is certainly not the sexy side of information security, but it is a critical component. If there is interest, I’d be happy to explore different aspects further, so let us know. Perhaps I can peel back the curtain a bit more in another article and provide a look at the vulnerability management processes we use internally. How the sausage, or security advisory, is made, as it were. Especially if it might be useful for others looking to build their own practice. But I like my job so I’ll have to get permission before I start disclosing internal information. We welcome all feedback, positive or negative, as it helps us do a better job for you. Thank you.
Security Best Practices for BIG-IP & BIG-IQ systems
This isn’t going to be an exhaustive list of steps you should take to secure a BIG-IP environment, but some colleagues and I worked on this list a little while ago and I wanted to finally get it out there for everyone to consume. There’s a wealth of information outside of this on AskF5 with specific steps to take to configure specific pieces of functionality, and it’s hard to link to them all here because often they are version specific where functionality has changed or been enhanced across major releases, so I’ll leave looking up those steps as an exercise for the reader should you find steps here you want to undertake. You’ll occasionally see us refer to the “control-plane” and “data-plane” in F5 documentation; the “control-plane” encompasses all the ways you can manage a device or installation – the Web UI (TMUI), iControl REST, iControl SOAP, SSH, etc. – as well as big3d, bigd and other daemons relevant to the management of the system. The “data-plane” encompasses every construct that passes user traffic like Virtual Servers, NATs, SNATs and so on, basically everything other than the control-plane. From here on in, those are the constructs I’ll be referring to. Step 1: Minimize access to the control-plane This is good practice for any system, but especially those which might sit in a privileged position in your network such as the BIG-IP (or edge firewalls and so on). It’s essential to keep the control-plane off the internet (with few exceptions such as big3d communications between BIG-IP DNS and BIG-IP LTM devices which may often traverse the internet); ideally though, you want to restrict access to only authorized IT staff. It’s also good practice to control access to any control-plane services (SSH, HTTP, SNMP etc.) so that traffic is only allowed to travel to and from hosts you expect. Wherever possible, it’s best to use a management DMZ to control access, but you should also think about how to restrict lateral movement within the DMZ using microsegmentation or on-device controls, and the on-device controls were significantly enhanced in 14.1 to provide a robust management interface firewall. Access to a management DMZ should be through a jump box or VPN with 2FA enabled. Jump boxes provide a dedicated environment which can be secured, and they provide meaningful protection against XSS and CSRF attacks since administrators will use the jump box only to administer the device and perform general-purpose browsing and other office-related tasks on their own hosts. Even without this infrastructure, it is better to administer your BIG-IP using a local Virtual Machine or, at the very least, a dedicated browser, to offer protection against phishing-delivered XSS and CSRF attacks. Of course, we understand that the network design changes required for a management DMZ won’t happen overnight, but the on-device management interface firewall can at least be implemented independently, as can mandating a more secure environment to perform administrative tasks within. Step 2: BIG-IP Management and Self IPs Firstly, make sure that all Self IPs are configured with “Lockdown None” to ensure that no control-plane services are exposed, unless you specifically need to expose a service such as big3d (port 4353) where you should be sure to expose only the ports you require. On your dedicated management VLAN and non-routable HA VLANs you can use “Allow Default”, though consider allowing only specific ports where possible.
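As an illustration of the Self IP guidance above, here is a minimal sketch that audits the port lockdown (allow-service) setting of every Self IP over iControl REST and flags anything that is not locked down to none. The management address and credentials are placeholders, and the exact shape of the allowService property can vary slightly by version.

import requests

# Placeholders - replace with your management address and credentials.
BIGIP = "https://192.0.2.10"
AUTH = ("admin", "admin")

# Fetch all Self IPs and report their port lockdown ("allow-service") setting.
selfips = requests.get(f"{BIGIP}/mgmt/tm/net/self",
                       auth=AUTH, verify=False).json().get("items", [])

for sip in selfips:
    # allowService is typically a list (e.g. ['default']) and may be absent when lockdown is none.
    allow = sip.get("allowService", "none")
    flag = "" if allow == "none" else "  <-- review this setting"
    print(f"{sip['fullPath']}: allow-service = {allow}{flag}")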
Out-of-band management over a dedicated interface or VLAN is strongly recommended. You can achieve this using either the dedicated management interface on hardware platforms, or a dedicated management VLAN on the production interfaces where preferred or where a dedicated management interface is unavailable (e.g., a single-NIC cloud deployment). Step 3: Hardening the BIG-IP Where possible, you should consider storing secrets in a Hardware Security Module – an onboard FIPS HSM or a NetHSM offers an extremely high level of security for SSL keys. For those not wishing to go to a hardware HSM, the built-in SecureVault functionality makes recovery of SSL keys more difficult for any unauthorized user who might gain access to the BIG-IP’s control-plane. For more information on SecureVault, F5 has a knowledge base article available: K73034260 Reduce your attack surface wherever possible by provisioning additional modules as you need them, rather than up-front; this may also help to reduce the number of Security Advisories that are applicable to your systems. Consider using AAA such as RADIUS, TACACS or LDAP for authentication to the BIG-IP control-plane rather than locally configured accounts, as this will immediately bring all accounts with access to the BIG-IP under the control of your pre-existing enterprise account security practices. Remember that, regardless of remote authentication, the root and admin passwords are still available for fallback local authentication, so be sure to configure strong passwords and ensure they are stored securely. If you are using BIG-IP 15.0.0 or later you can also use a remote APM system to manage the authentication for the control-plane and implement 2FA/MFA using the APM system: https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-local-traffic-manager-implementations/implementing-apm-system-authentication.html Step 4: Monitoring You should configure off-box syslog (ideally to a SIEM) so that you have a reliable, immutable record of configuration changes, potential indicators of compromise, system issues etc., and configure alerts based on those. You can also consider using SNMP traps and polling to monitor system performance and load, and to watch for potential indicators of attack against the data-plane (such as denial of service attacks). Consider regularly uploading qkviews to iHealth, unless doing so is prohibited by your enterprise security policies, as the built-in heuristics will warn you about potential device misconfigurations, security vulnerabilities impacting your specific version, hardware and/or configuration, and any indicators of compromise found on your system. You can automate this step using BIG-IQ, which can also be used to automate taking regular configuration snapshots. Step 5: Maintaining You really want to be running a recent software release, ideally within the last 2 LTS branches, as F5 continuously improves functionality to address new attacks, and to ensure you are consuming security fixes quickly. Some customers prefer to use engineering hotfixes to address known issues, and we would suggest looking to move back to a mainline branch as soon as the fixes you require are available there, because this will ensure you have the minimum time-to-patch when new defects or vulnerabilities are uncovered in the product.
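That kind of patch planning is easier when you know exactly which software version and which modules each device is running. Here is a minimal sketch that reads both over iControl REST; the management address and credentials are placeholders.

import requests

# Placeholders - replace with your management address and credentials.
BIGIP = "https://192.0.2.10"
AUTH = ("admin", "admin")

def get_collection(path):
    """Return the parsed JSON for a tmsh-style REST collection."""
    resp = requests.get(f"{BIGIP}{path}", auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

# Installed software per volume, with the active one marked.
for vol in get_collection("/mgmt/tm/sys/software/volume").get("items", []):
    active = "active" if vol.get("active") else ""
    print(vol.get("name"), vol.get("product"), vol.get("version"), active)

# Provisioned modules - anything not at level 'none' is part of your attack surface.
for mod in get_collection("/mgmt/tm/sys/provision").get("items", []):
    if mod.get("level", "none") != "none":
        print("provisioned:", mod.get("name"), "->", mod.get("level"))

Having this inventory to hand makes it much quicker to determine whether a newly announced defect or vulnerability applies to you.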
Speaking of vulnerabilities – make sure you are signed up to the F5 Security mailing list to get alerts for significant vulnerabilities, both when Quarterly Security Notifications (QSNs) occur and should high-impact third-party vulnerabilities require out-of-band notifications. For more information on F5's QSN approach, and the dates of past and future QSNs, see K67091411. I mentioned this earlier under Monitoring, but ensure you are taking regular backups of your devices so that you have a known-good, uncompromised configuration to work with should the worst happen and a device needs reimaging. As noted earlier, BIG-IQ can help automate this task although, as always, be cautious and ensure you test and validate your backup scripts to make sure that you are gathering valid backups and the script(s) are not accidentally erasing anything when rotating old backups out. Step 6: Recovery Compromise is relatively uncommon, and if you take the steps outlined above to secure your environment and adhere to security best practices, it is unlikely to happen to you. That said, before anything else, preparation is the key to success and, given that recovery efforts often involve several departments within an organization, make sure you have worked through a documented recovery plan. At a minimum, we'd suggest covering:
- How you would isolate the compromised device (If a pair is compromised, should you keep a compromised box running and serving customers with potentially serious PCI/GDPR implications? Does your application delivery design allow you to keep serving customers after the loss of a device pair? Should you invoke Disaster Recovery?)
- When you can reintroduce devices into service – does your company policy require that devices are held for forensic analysis? If so, do you have spare devices you can use to maintain service for your customers?
- How you would reimage the devices from scratch and recover from backups
- How you would revoke and replace SSL keys on the device which may have been compromised
- What other secrets might need to be replaced (RADIUS, TACACS, SNMP?)
This sounds onerous, but it's so much easier to have this conversation before you've got to make critical, service-impacting decisions. And of course, the scope shouldn't be limited to just your F5 estate! Summary As I said at the outset, this list is far from exhaustive and needs to be read in the context of whatever existing guidelines your organisation has for securing, monitoring and maintaining systems, as well as any existing disaster recovery plans. It's also worth saying that the technical specifics will change as F5's product offerings evolve with BIG-IP Next or the NGINX suite of products, but the general principles will remain largely the same. There is a wealth of documentation on AskF5 around securing systems with specific, technical steps you can follow, additional resources and so on, and I'll link to just a few of those here:
K67091411: Guidance for Quarterly Security Notifications
K9970: Subscribing to email notifications regarding F5 products
K27404821: Using F5 iHealth to diagnose vulnerabilities
K11438344: Considerations and guidance when you suspect a security compromise on a BIG-IP system
K53108777: Hardening your F5 system
K45321906: Harden your BIG-IQ system
HTTP Multipart and Security Implications
Overview
HTTP multipart, specifically multipart/form-data, is a media type that allows the encoding of information as a series of parts in a single message. This was introduced way back in 1998 as RFC 2388. It is commonly used for forms that are expressed in HTML and where the form values are sent via HTTP. However, it can also be used for forms that are presented using representations other than HTML (like spreadsheets, Portable Document Format, etc.), and for transport using other means than HTTP. In brief, this is how it works:
- In forms, there are a series of fields to be supplied by the user who fills out the form. Each field has a name. Within a given form, the names are unique.
- multipart/form-data contains a series of parts. Each part is expected to contain a Content-Disposition header where the disposition type is form-data, and where the disposition contains an (additional) parameter of name, where the value of that parameter is the original field name in the form.
- Each part has an optional Content-Type, which defaults to text/plain. If the contents of a file are returned via filling out a form, then the file input is identified as the appropriate media type, if known, or application/octet-stream.
- If multiple files are to be returned as the result of a single form entry, they should be represented as a multipart/mixed part embedded within the multipart/form-data.
- Each part may be encoded and the Content-Transfer-Encoding header supplied if the value of that part does not conform to the default encoding.
Below is an example of an HTTP multipart request and response sequence. In this example, the Content-Type header specifies that the body of the request is multipart/form-data and provides a boundary string that is used to separate the different parts of the message. The body of the request contains two parts: one named "text" with the value "Hello World", and one named "file" with the filename "example.txt" and the content "This is the content of the file example.txt.".

POST /upload HTTP/1.1
Host: example.com
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Length: 343

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="text"

Hello World
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="file"; filename="example.txt"
Content-Type: text/plain

This is the content of the file example.txt.
------WebKitFormBoundary7MA4YWxkTrZu0gW--

Here's an example of a possible HTTP response to this request:

HTTP/1.1 200 OK
Date: Tue, 14 Jun 2023 12:00:00 GMT
Content-Type: application/json
Content-Length: 30

{
  "status": "upload successful"
}

In this example, the server responds with a status code of 200 (OK) and a JSON body indicating that the upload was successful. The Content-Type header in the response specifies that the body of the response is JSON. In relation to multipart, MIME types are used to specify the data type of each part in a multipart/form-data message. Each part in a multipart message can have a different MIME type. For example, one part could be plain text (text/plain), while another part could be a JPEG image (image/jpeg). This allows a single HTTP request to contain different types of data. When sending large binary data or files, using multipart can be more efficient than other methods. This is because the data doesn't need to be encoded and decoded as it would if you were using a method like JSON.
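In practice, most HTTP client libraries build the multipart body for you. The short sketch below (the upload URL is hypothetical) produces a request very similar to the one shown above.

import requests

# Hypothetical upload endpoint used purely for illustration.
URL = "http://example.com/upload"

# 'data' becomes ordinary form-data parts; 'files' becomes a file part with its own
# Content-Disposition filename and Content-Type; requests generates the boundary.
response = requests.post(
    URL,
    data={"text": "Hello World"},
    files={"file": ("example.txt", b"This is the content of the file example.txt.", "text/plain")},
)
print(response.status_code, response.text)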
Multipart is a well-established standard and is supported by virtually all modern web browsers and web servers.
Security Implications
As with any data input, multipart data can be a vector for attacks if not handled properly. For instance, an attacker might attempt to exploit the system by sending maliciously crafted multipart data. This could include things like sending overly large files, or files that contain malicious code. Therefore, it's important to validate and sanitize all incoming data, limit the size of incoming files, and handle all data in a secure manner to prevent potential security issues. Here's an example of a maliciously crafted HTTP multipart request. In this case, the attacker is attempting to perform a Path Traversal attack by providing a filename that includes directory traversal sequences.

POST /upload HTTP/1.1
Host: example.com
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Length: 343

------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="file"; filename="../../etc/passwd"
Content-Type: text/plain

Malicious content here.
------WebKitFormBoundary7MA4YWxkTrZu0gW--

In this example, the attacker is trying to overwrite the /etc/passwd file, which is a critical system file on Unix-like operating systems. To mitigate such an attack, you can use an F5 iRule to inspect the filename in the Content-Disposition header of each part in a multipart request and reject the request if the filename contains a directory traversal sequence. Here's an example of what that iRule might look like:

when HTTP_REQUEST {
    if { [HTTP::header exists "Content-Type"] } {
        set content_type [HTTP::header "Content-Type"]
        if { [string tolower $content_type] starts_with "multipart/form-data" } {
            HTTP::collect
        }
    }
}
when HTTP_REQUEST_DATA {
    set data [HTTP::payload]
    if { $data contains "filename=\"../" } {
        HTTP::respond 400 content "Invalid filename"
    } else {
        HTTP::release
    }
}

This iRule works by collecting the body of the request if the Content-Type header indicates that it is a multipart request. When the body of the request is available, it checks if the body contains the string filename="../ . If it does, it responds with a 400 Bad Request error. Otherwise, it allows the request to proceed.
File Upload Security
Modern web applications are expected to provide end users with the ability to upload files. Sending files over HTTP multipart poses a significant security risk, and without proper mitigations in place it can lead to a server compromise or a data breach. Developers can usually implement a number of checks and validations to ensure no malicious file is uploaded to the server. These include blacklisting file types with dangerous extensions, validating the MIME type of an uploaded file, checking image headers, restricting execution of scripts in an upload directory using .htaccess, and in some cases validating the file on the client side (a minimal server-side sketch of some of these checks appears at the end of this article). Keep in mind, though, that some of these mitigations can be bypassed by a skilled attacker, since the attacker controls the headers of an HTTP request. It is a good idea to combine application-side mitigations with a WAF as part of a layered defense strategy. F5 Advanced WAF offers a number of built-in protections for multipart data as part of HTTP Protocol Compliance, along with the ability to tightly control file uploads.
BIG-IP ASM HTTP protocol compliance
https://my.f5.com/manage/s/article/K10280
Bad multipart/form-data request parsing: When the content type of a request header contains the multipart/form-data substring, the system checks whether each multipart request chunk contains a Content-Disposition header containing a name value and corresponding parameter key value. For example: name="parameter_key". If the Content-Disposition header does not contain the required parameters, a violation is issued. Note: Content-Disposition is not covered under the HTTP standard (RFC 2616), but is instead covered separately under RFC 2183 - Communicating Presentation Information in Internet Messages: The Content-Disposition Header Field.
Bad multipart parameters parsing: The system examines the requests to verify that the Content-Disposition header matches the format name="param_key";\r\n. The system also checks that the following is true:
* A boundary follows immediately after the request headers.
* The parameter value matches the format name="param_key";\r\n.
* A chunked body contains at least one CRLF.
* A chunked body ends with CRLF.
If one of these is false, a violation is issued.
ASM File Upload Specific Mitigations
https://my.f5.com/manage/s/article/K01235989
https://my.f5.com/manage/s/article/K64356849
https://my.f5.com/manage/s/article/K90728313
https://my.f5.com/manage/s/article/K78925560
https://my.f5.com/manage/s/article/K01385558
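As referenced earlier, here is a minimal sketch of the kind of application-side checks that can complement the WAF protections above. The allowed extensions, size limit, and upload directory are illustrative assumptions, not a definitive implementation.

import os
import uuid

# Illustrative assumptions - adjust to your application's requirements.
ALLOWED_EXTENSIONS = {".txt", ".pdf", ".png", ".jpg"}
MAX_SIZE_BYTES = 5 * 1024 * 1024          # 5 MB upload limit
UPLOAD_DIR = "/var/app/uploads"           # directory with script execution disabled

def save_upload(filename: str, content: bytes) -> str:
    """Validate an uploaded file and store it under a server-generated name."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError("file type not allowed")
    if len(content) > MAX_SIZE_BYTES:
        raise ValueError("file too large")
    # Never trust the client-supplied name: generate one to defeat path traversal.
    stored_name = f"{uuid.uuid4().hex}{ext}"
    with open(os.path.join(UPLOAD_DIR, stored_name), "wb") as handle:
        handle.write(content)
    return stored_name

Checks like these handle the obvious cases, but, as noted above, anything derived from client-supplied headers can be spoofed, which is why pairing them with the BIG-IP ASM / Advanced WAF protections listed above is the recommended, layered approach.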