So I'm currently troubleshooting slow speeds through my APM SSL VPN.
I ran a bandwidth test for each scenario and got the following results:
Client on local site (no VPN):
Down: 837.53 Mbit/s
Up: 840.39 Mbit/s
Ping: 1.12 ms

Client on remote site, going through the same BIG-IP but via a forwarding VIP (no VPN):
Down: 81.64 Mbit/s
Up: 76.24 Mbit/s
Ping: 13.26 ms

Client using F5 APM SSL VPN:
Down: 20.01 Mbit/s
Up: 25.15 Mbit/s
Ping: 29.88 ms
These tests show that the ISP I'm connected to does not impose any bandwidth limitation, and that the BIG-IP I'm sending the traffic through is not bandwidth-limited either.
I accept that there is some performance loss due to the encrypted tunnel, but these numbers indicate that I have lost:
Down: 75% loss
Up: 67% loss
Ping: 125% increase
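For reference, those figures fall out of the numbers above (forwarding-VIP test as baseline, VPN test as measured):

```shell
# Rough loss figures, computed from the test results above.
awk 'BEGIN {
    printf "Down loss:     %.0f%%\n", (1 - 20.01 / 81.64) * 100   # throughput
    printf "Up loss:       %.0f%%\n", (1 - 25.15 / 76.24) * 100   # throughput
    printf "Ping increase: %.0f%%\n", (29.88 / 13.26 - 1) * 100   # latency
}'
```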
I read some threads on DC regarding this, and for many the solution has been DTLS. So I configured it and ran some new tests:
Down: 24.95 Mbit/s
Up: 23.04 Mbit/s
Ping: 29.88 ms
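For anyone wanting to reproduce this: DTLS is enabled on the Network Access resource. A sketch of roughly what that looks like in tmsh (the resource name `vpn_na` is just an example; verify the exact property names on your version with `tmsh list apm resource network-access`):

```shell
# Enable DTLS on the Network Access resource (assumed property names, v12.x).
tmsh modify apm resource network-access vpn_na dtls true dtls-port 4433
tmsh save sys config
# Note: the client falls back to TLS if UDP/4433 is blocked along the path,
# so confirm the client actually negotiates DTLS when testing.
```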
So the results are pretty much the same. I set the bandwidth test aside and tried a file download instead.
Using my local connection I got 2-3 MB/s, but over the VPN (using both TLS 1.2 and DTLS) I got 300 KB/s, so around 2.4 Mbit/s.
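The unit conversion checks out: 300 KB/s is about 2.4 Mbit/s.

```shell
# 300 KB/s in Mbit/s: 8 bits per byte, 1000 kbit per Mbit.
awk 'BEGIN { printf "%.1f Mbit/s\n", 300 * 8 / 1000 }'
```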
The BIG-IP I'm using is a VE provisioned with a 1 Gbit/s license, and that location has a 100 Mbit/s ISP line. Getting a 2.4 Mbit/s download through the tunnel is really bad.
I tried to tweak the TCP profiles and play with the compression settings, but I still get the same results. I'm running TMOS 12.1.3.4.
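The TCP profile tweaks were along these lines (profile name and values here are just illustrative; check what's available on your version with `tmsh list ltm profile tcp <name> all-properties`):

```shell
# Hypothetical example: derive a custom TCP profile and raise the buffers.
tmsh create ltm profile tcp tcp-vpn-tuned \
    defaults-from tcp-mobile-optimized \
    send-buffer-size 131072 \
    receive-window-size 131072 \
    nagle disabled
# Attach the profile to the Network Access virtual server, then retest.
```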
This slowdown can be caused by a web proxy acting as a man-in-the-middle for the SSL communication.
I suggest you verify the issuer of the SSL certificate sent to the client to confirm this, or add a client-side SSL profile (with the same cipher suite) to the VIP from your first test.
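A quick way to check the issuer from the client side (replace `vpn.example.com` with your APM virtual server's hostname):

```shell
# Show the issuer and subject of the certificate actually presented to the client.
# If a proxy is intercepting TLS, the issuer will be the proxy's CA,
# not the CA of the certificate configured on the BIG-IP.
openssl s_client -connect vpn.example.com:443 -servername vpn.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject
```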
I have an F5 SWG, but I have verified that this traffic is not passing through that solution; it is routed directly using a wildcard Performance (Layer 4) virtual server. However, when using a different speed test provider, my results are much better, so the low speeds could simply be caused by the bandwidth tester itself.
I set up an iPerf server inside my network and ran an iPerf test both using TLS 1.2 and DTLS. Here are the results:
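(For anyone reproducing this, the tests were of this general shape; the server address and durations are just examples:)

```shell
# On the internal server:
iperf3 -s

# From the VPN client: upload test, then download (reverse) test.
iperf3 -c 10.10.10.10 -t 30
iperf3 -c 10.10.10.10 -t 30 -R
```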
I still don't think that's good enough on iPerf. You're still losing 90% of your bandwidth.
I came here looking for recent known issues with speed on APM. I've had us on DTLS for a long time now, and things were looking pretty good early on, but lately things have slowed down. I'm going to try rebooting our BIG-IP... but I don't know.
Actually, I'm not losing 90% of the speed on iPerf; it's more like 40% in rough numbers, given that I'm passing the traffic through a 100 Mbit/s line.
Interestingly enough, when running iPerf directly over the Internet I'm getting worse speeds. Check this out:
And when going directly I actually have fewer hops, since I'm going straight from my Juniper firewall to the iPerf server.
I did some more thorough speed tests, both from a local server in the network out to the Internet and over my VPN to the Internet. I got the following results:
Local Server towards FiberbyApS, Copenhagen
VPN Client (DTLS) towards FiberbyApS, Copenhagen
VPN Speed Loss - Download: -21%, Upload: -21%

Local Server towards FibiaPS, Taastrup
VPN Client (DTLS) towards FibiaPS, Taastrup
VPN Speed Loss - Download: -18%, Upload: -21%

Local Server towards TDC Group, Copenhagen
VPN Client (DTLS) towards TDC Group, Copenhagen
VPN Speed Loss - Download: -36%, Upload: -28%
In most cases I lost on average about 20% in both upload and download. But you have to take into consideration that my traffic must first leave my office, pass through my equipment at home, go out to the speed test servers, and come all the way back, on top of the encryption overhead. So a 20% loss is not something I see as a shock. Perhaps if I were to tweak my settings I could improve those numbers a bit.
But there are many factors to take into consideration: the number of hops to reach my home environment, packet loss along the way, and so on. For instance, how can one speed test result in an almost 40% loss while others give me only 18%?
Have you run similar tests in your environment? It would be interesting to see if someone is getting worse/better numbers.