Published 16-Apr-2020 08:20
In a discussion with a couple of community members, we talked about TLS session caching as a powerful way to improve TLS handshake performance. With the global increase in Internet traffic due to COVID-19, this is an important concept to grasp.
This is the topology I used to perform the tests:
Both client and server SSL profiles have cache-size set to 1:
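For reference, cache-size can be set from tmsh; the profile names below are just the ones I'm assuming for this lab, so substitute your own:

```shell
# Shrink the session cache to a single entry on both profiles
# (clientssl-test/serverssl-test are assumed lab profile names)
tmsh modify ltm profile client-ssl clientssl-test cache-size 1
tmsh modify ltm profile server-ssl serverssl-test cache-size 1
```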
If you just want to know how TLS caching works, skip straight to the Lab Test Results section.
First off, we should know that there is a global DB variable for TMM's cache size, with minimum and maximum values:
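I won't assume the exact variable name here; one way to find it yourself is to search the sys db list from tmsh (a sketch, and the grep pattern is just a guess at what to match):

```shell
# Search the sys db variables for anything related to the SSL
# session cache; all-properties also shows value ranges
tmsh list sys db all-properties | grep -i -B2 -A4 'session.*cache'
```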
TMM is BIG-IP's forwarding plane daemon.
Other things we should be aware of:
Note: To learn about the difference between Session ID and Session Ticket, please refer to this article here.
In the beginning cache is clear:
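The cache counters shown throughout this article can be pulled from the profile statistics; a sketch, with the profile name assumed:

```shell
# Show client-ssl profile statistics, including session cache
# entries, lookups, hits, and overflows
tmsh show ltm profile client-ssl clientssl-test
```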
I then send the first request, making sure I save the session information locally:
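The kind of command used here looks like this with OpenSSL's s_client; the virtual server address is an assumption for my lab:

```shell
# First request: save the negotiated TLS session to a local file
# (10.199.3.10:443 is an assumed lab VIP)
printf 'GET / HTTP/1.0\r\n\r\n' | \
    openssl s_client -connect 10.199.3.10:443 -sess_out session.pem
```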
We then see the first cache entry populated, with 1 lookup but no hits yet, since this is a new cache entry:
Next, I send a 2nd request, reusing the same SSL session with OpenSSL:
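Reusing the session is just a matter of feeding the saved file back in (same assumed lab VIP as before):

```shell
# Second request: present the previously saved session to resume it;
# "Reused" in s_client's output confirms resumption
printf 'GET / HTTP/1.0\r\n\r\n' | \
    openssl s_client -connect 10.199.3.10:443 -sess_in session.pem
```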
And sure enough, we see our first cache hit (note that the lookup count also increases):
I'm now going to create a new TLS session with no session reuse:
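Without -sess_in, s_client performs a brand-new full handshake (again against the assumed lab VIP):

```shell
# Third request: fresh handshake with no session file presented,
# so the BIG-IP cannot resume a cached session
printf 'GET / HTTP/1.0\r\n\r\n' | \
    openssl s_client -connect 10.199.3.10:443
```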
And now we see the first overflow: it is a cache miss, and because the maximum cache size is set to 1, the new cache entry overwrites the oldest one:
Now let's move on to Server SSL profile.
I sent the first request:
The usual entry is added to the cache, with 1 lookup:
I sent the 2nd request intentionally using curl again (not reusing the previous TLS session information):
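Each standalone curl invocation negotiates a new client-side TLS session, which is exactly the point of this test; a sketch against the same assumed lab VIP:

```shell
# A fresh curl run carries no previous session state;
# -k skips certificate verification for the lab's test certificate
curl -k https://10.199.3.10/
```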
But we still see a cache hit!
The reason is that on the Server SSL side, BIG-IP no longer uses the Session ID/Session Ticket to retrieve the cached entry. Remember, on the Server SSL side, BIG-IP is the client!
Instead, BIG-IP uses the destination server IP and port to retrieve the previously stored Session ID/Session Ticket, and sends it to the back-end server in the Client Hello message in order to resume the session.
That's why we had a cache hit here!
For this particular overflow test, I added a 2nd pool member (172.16.199.32) to our pool, because the Server SSL profile uses the destination IP and port to retrieve the cached TLS session, and that key wouldn't change if we kept using the same pool member, would it?
Now I issued a 3rd request (load balanced to the new pool member this time):
And now we see an overflow, because the pool member is a different one:
This should give you enough insight to understand how TLS caching works and to troubleshoot any related issues.