Forum Discussion
TLS version on serverside of connection
I have an HTTPS virtual server that requires both a client and a server SSL profile. The application broke after upgrading from 10.2.4 -> 11.4.1. I took network traces and found that the LTM is using TLS 1.2 on the serverside of the connection, but the pool member only supports TLS 1.0.
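For reference, this is roughly how I captured the serverside handshake (192.0.2.10 is a placeholder for the real pool member address):

    # On the BIG-IP, capture only the serverside traffic to the pool member:
    tcpdump -ni 0.0 -s0 -w /var/tmp/serverside.pcap host 192.0.2.10 and port 443

    # Then read the SSL handshake out of the capture. ssldump prints the
    # handshake versions: 3.1 = TLS1.0, 3.2 = TLS1.1, 3.3 = TLS1.2.
    ssldump -n -r /var/tmp/serverside.pcap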
I've been wading through the SSL ciphers documentation, and I still don't quite understand what controls the TLS version (1.0, 1.1, or 1.2) the LTM chooses to use. My serverssl profile is configured with the DEFAULT cipher list, and I see all three TLS protocol versions in it. I've been experimenting with the cipher strings and can rearrange them in different ways, but I have found nothing that clearly explains why the LTM would choose TLS 1.2 over TLS 1.0. The docs claim that the DEFAULT cipher string is ordered by speed, but the tmm --serverciphers DEFAULT output gives no indication of "speed". If the list really were ordered by speed and that order drove the choice, I would expect a TLS 1.2 cipher to be listed first, since I have confirmed on the wire that's what the LTM selects.
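For reference, here is roughly how I've been inspecting the list; if I'm reading the docs right, the PROT column in the output shows which protocol version each suite is tied to:

    # Expand the DEFAULT cipher string; the PROT column shows the protocol
    # version (TLS1, TLS1.1, TLS1.2) associated with each suite:
    tmm --serverciphers 'DEFAULT'

    # Re-sort by strength instead of speed to compare the two orderings:
    tmm --serverciphers 'DEFAULT:@STRENGTH'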
Can someone help me understand what controls the TLS version that the LTM selects in a server SSL profile?
1 Reply
- Ian_124377
Nimbostratus
From my understanding of the articles I have read, the sort keywords (@SPEED and @STRENGTH) actually sort the cipher suites, not the TLS protocol versions. What is supposed to happen in the SSL negotiation:
1.) The client sends a ClientHello message specifying the highest TLS protocol version it supports and a list of suggested CipherSuites.
2.) The server responds with a ServerHello message containing the chosen TLS protocol version and a CipherSuite selected from the client's list.
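A quick way to watch that exchange from a shell (192.0.2.10 again stands in for a pool member address) is something like:

    # s_client offers its highest supported version in the ClientHello and
    # accepts whatever the server selects; the SSL-Session block at the end
    # reports the negotiated protocol and cipher:
    openssl s_client -connect 192.0.2.10:443 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'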
For some reason, 10.2.4 has the ability to use TLS 1.2 but chooses not to use it, while 11.4.1 has the ability to use TLS 1.2 and chooses to use it. I have not found any F5 doc on ordering the TLS protocol versions themselves. It seems you can either order the encryption algorithms (RC4, AES, DES, etc.) -OR- exclude certain TLS protocol versions/encryption algorithms altogether, as sketched below.
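If you just need the serverside to stop offering the newer versions, the exclusion route would look roughly like this (my_serverssl is a placeholder profile name, and I'm assuming the TLSv1_1/TLSv1_2 keywords from the cipher documentation):

    # Preview which suites survive when TLS1.1 and TLS1.2 are excluded:
    tmm --serverciphers 'DEFAULT:!TLSv1_1:!TLSv1_2'

    # Apply the same string to the server SSL profile:
    tmsh modify ltm profile server-ssl my_serverssl ciphers 'DEFAULT:!TLSv1_1:!TLSv1_2'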
I think one of the following may be true, either of which would explain this behavior:
1.) TLS 1.2 support, first included in 10.2.3, may have been too new (or the standard not yet seen as settled), so F5 opted not to use it by default as the highest supported TLS version in its ClientHellos.
2.) It is a bug in 10.2.4 that the highest supported TLS version is not offered.
One final thing I would mention: the pool member you reference should not fail. If your ClientHello tells the pool member "Hey, I can use up to TLS 1.2", the pool member should simply choose TLS 1.0 if that is the highest supported version it can use. But it sounds to me like your pool member is terminating the connection rather than selecting the lower TLS version, and that sounds like a problem with the pool member. You can test that distinction directly, as shown below.
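Something like this, run against the pool member (192.0.2.10 as a stand-in again), should separate the two cases:

    # Control: a plain TLS1.0-only handshake should succeed against the member.
    openssl s_client -connect 192.0.2.10:443 -tls1 < /dev/null

    # Flexible handshake: offers the client's highest version but accepts a
    # downgrade. If this one fails while the -tls1 test works, the member is
    # rejecting high-version ClientHellos instead of negotiating down.
    openssl s_client -connect 192.0.2.10:443 < /dev/null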
Articles of interest
Note the 11.2.1 through 11.4.1 default (NATIVE:!MD5:!EXPORT:!DES:!DHE:!EDH:@SPEED): http://support.f5.com/kb/en-us/solutions/public/13000/100/sol13171.html
Note the 10.2.x default (!SSLv2:ALL:!DH:!ADH:!EDH:!MD5:!EXPORT:!DES:@SPEED): http://support.f5.com/kb/en-us/solutions/public/7000/800/sol7815.html
http://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake
