Forum Discussion

Philip_Jonsson_
Nov 08, 2018

Slow Speed through APM SSL VPN Even With DTLS

Hello everyone!

 

So I'm currently troubleshooting slow speeds through my APM SSL VPN.

 

I ran a bandwidth test for each scenario and got the following results:

 

Client Local Site (No VPN):

 

  • Down: 837,53 Mbit/s
  • Up: 840,39 Mbit/s
  • Latency: 1,12 ms

Client on remote site, going through the same BIG-IP but via a forwarding VIP (No VPN):

 

  • Down: 81,64 Mbit/s
  • Up: 76,24 Mbit/s
  • Latency: 13,26 ms

Client using F5 APM SSL VPN:

 

  • Down: 20,01 Mbit/s
  • Up: 25,15 Mbit/s
  • Latency: 29,88 ms

From these tests I've determined that the ISP I'm connected to doesn't impose any bandwidth limitation, and that the BIG-IP I'm sending the traffic through isn't bandwidth-limited either.

 

I accept that there is some performance loss due to the encrypted tunnel, but compared to the forwarding VIP test these numbers indicate that I have lost:

 

  • Down: 75% loss
  • Up: 67% loss
  • Latency: ~125% increase (13,26 → 29,88 ms)

I read some threads on DevCentral about this, and for many the solution has been DTLS. So I configured it (the pieces involved are sketched after the numbers below) and ran some new tests:

 

  • Down: 24,95 Mbit/s
  • Up: 23,04 Mbit/s
  • Latency: 29,88 ms
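
For anyone wanting to reproduce the setup: DTLS boils down to enabling the DTLS option on the Network Access resource (under Network Settings) and adding a UDP virtual server on port 4433 with the same address as the SSL VPN virtual, carrying the same connectivity profile. A rough sketch; the virtual server name, address, and profile name below are placeholders for my own objects:

    # UDP listener for DTLS next to the existing TCP/443 VPN virtual
    # (10.0.0.10 and my-connectivity-profile are placeholders)
    tmsh create ltm virtual vpn_vs_dtls destination 10.0.0.10:4433 \
        ip-protocol udp profiles add { my-connectivity-profile }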

So the results are pretty much the same. I disregarded the bandwidth test and tried a file download instead.

 

Using my local connection I got 2-3 MB/s, but over the VPN (with both TLS 1.2 and DTLS) I got around 300 KB/s, so roughly 2,4 Mbit/s (300 KB/s × 8 bits/byte).

 

The BIG-IP I'm using is a VE provisioned with a 1 Gbps license, and that location has a 100 Mbit ISP line. Getting a 2,4 Mbit/s download through the tunnel is really bad.

 

I tried tweaking the TCP profiles and playing with the compression settings, but I still get the same results. I'm running TMOS version 12.1.3.4.
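
In case it helps, the kind of tweaking I tried looks roughly like this (a sketch; vpn-tcp-wan is just a name I made up, and whether the WAN-optimized baseline actually helps will depend on the path):

    # Inspect the built-in WAN-optimized TCP profile
    tmsh list ltm profile tcp tcp-wan-optimized
    # Derive a custom profile from it to experiment with on the VPN virtual server
    tmsh create ltm profile tcp vpn-tcp-wan defaults-from tcp-wan-optimized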

 

Do you have any suggestions?

 

6 Replies

  • Hi Philip,

     

    This slowdown can be caused by a web proxy acting as a man-in-the-middle for the SSL communication.

     

    I suggest you verify the issuer of the SSL certificate sent to the client to confirm this. Alternatively, add a client-side SSL profile (with the same cipher algorithm) to the forwarding VIP from your first test.
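
    A quick way to check the issuer from a client machine (vpn.example.com is a placeholder for your APM virtual server address):

        # If an intercepting proxy sits in the path, the issuer shown here
        # will be the proxy's CA, not the CA that signed your VPN certificate.
        openssl s_client -connect vpn.example.com:443 -servername vpn.example.com </dev/null 2>/dev/null \
            | openssl x509 -noout -issuer -subject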

     

  • Hey Nicolas

    I have an F5 SWG, but I have verified that traffic is not passing through that solution; it is routed directly using a wildcard Performance L4 virtual server. However, when using a different speedtest provider, my results are much better, so the low speeds could simply come down to the bandwidth tester itself.

    I set up an iPerf server inside my network and ran an iPerf test both using TLS 1.2 and DTLS. Here are the results:

    TLS 1.2
    iperf3.exe -c 10.10.15.10 -p 2222
    Connecting to host 10.10.15.10, port 2222
    [  4] local 10.10.10.248 port 31409 connected to 10.10.15.10 port 2222
    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-1.00   sec  3.38 MBytes  28.3 Mbits/sec
    [  4]   1.00-2.00   sec  3.75 MBytes  31.5 Mbits/sec
    [  4]   2.00-3.00   sec  3.75 MBytes  31.5 Mbits/sec
    [  4]   3.00-4.00   sec  3.75 MBytes  31.4 Mbits/sec
    [  4]   4.00-5.00   sec  3.62 MBytes  30.4 Mbits/sec
    [  4]   5.00-6.00   sec  3.75 MBytes  31.4 Mbits/sec
    [  4]   6.00-7.00   sec  3.75 MBytes  31.4 Mbits/sec
    [  4]   7.00-8.00   sec  3.62 MBytes  30.4 Mbits/sec
    [  4]   8.00-9.00   sec  3.75 MBytes  31.4 Mbits/sec
    [  4]   9.00-10.00  sec  3.75 MBytes  31.5 Mbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-10.00  sec  36.9 MBytes  30.9 Mbits/sec                  sender
    [  4]   0.00-10.00  sec  36.8 MBytes  30.9 Mbits/sec                  receiver
    
    iperf Done.
    
    DTLS
    iperf3.exe -c 10.10.15.10 -p 2222
    Connecting to host 10.10.15.10, port 2222
    [  4] local 10.10.10.247 port 31093 connected to 10.10.15.10 port 2222
    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-1.00   sec  6.38 MBytes  53.4 Mbits/sec
    [  4]   1.00-2.00   sec  8.12 MBytes  68.2 Mbits/sec
    [  4]   2.00-3.00   sec  7.62 MBytes  63.9 Mbits/sec
    [  4]   3.00-4.00   sec  7.25 MBytes  60.8 Mbits/sec
    [  4]   4.00-5.00   sec  7.88 MBytes  66.1 Mbits/sec
    [  4]   5.00-6.00   sec  7.38 MBytes  61.8 Mbits/sec
    [  4]   6.00-7.00   sec  7.75 MBytes  65.0 Mbits/sec
    [  4]   7.00-8.00   sec  7.88 MBytes  66.1 Mbits/sec
    [  4]   8.00-9.00   sec  7.25 MBytes  60.8 Mbits/sec
    [  4]   9.00-10.00  sec  7.50 MBytes  62.9 Mbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-10.00  sec  75.0 MBytes  62.9 Mbits/sec                  sender
    [  4]   0.00-10.00  sec  74.9 MBytes  62.8 Mbits/sec                  receiver
    
    iperf Done.   
    

    The results are significantly better with DTLS. So I guess it is in fact working as it should and we just need to make sure we have a proper test.
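
    For completeness, the server side of these tests is just a listener on the same port, and adding -R on the client reverses the direction so download through the tunnel can be measured as well:

        # Server side, inside the network
        iperf3 -s -p 2222
        # Client side: the default direction is upload; -R measures download
        iperf3.exe -c 10.10.15.10 -p 2222 -R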

  • I still don't think that's good enough on iPerf. You're still losing 90% of your bandwidth.

     

    I came here looking for recent known issues with speed on APM. We've been on DTLS for a long time now, and things were looking pretty good early on, but lately things have slowed down. I'm going to try rebooting our BIG-IP... but I don't know.

     

    • Philip_Jonsson

      Hey DBornack

      Actually, I'm not losing 90% of the speed on iPerf; in rough numbers it's more like 40%, given that the traffic passes through a 100 Mbit line.

      Interestingly enough, when running iPerf directly over the Internet I get worse speeds. Check this out:

      iPerf results over Internet
         iperf3.exe -c  -p 2222
          Connecting to host port 2222
          [  4] local 10.100.6.27 port 1963 connected to port 2222
          [ ID] Interval           Transfer     Bandwidth
          [  4]   0.00-1.00   sec  1.62 MBytes  13.6 Mbits/sec
          [  4]   1.00-2.00   sec  6.25 MBytes  52.3 Mbits/sec
          [  4]   2.00-3.00   sec  7.12 MBytes  59.9 Mbits/sec
          [  4]   3.00-4.00   sec  7.38 MBytes  61.8 Mbits/sec
          [  4]   4.00-5.00   sec  5.25 MBytes  44.1 Mbits/sec
          [  4]   5.00-6.00   sec  6.00 MBytes  50.3 Mbits/sec
          [  4]   6.00-7.00   sec  5.12 MBytes  43.1 Mbits/sec
          [  4]   7.00-8.00   sec  7.50 MBytes  62.9 Mbits/sec
          [  4]   8.00-9.00   sec  8.12 MBytes  68.2 Mbits/sec
          [  4]   9.00-10.00  sec  6.00 MBytes  50.2 Mbits/sec
          - - - - - - - - - - - - - - - - - - - - - - - - -
          [ ID] Interval           Transfer     Bandwidth
          [  4]   0.00-10.00  sec  60.4 MBytes  50.6 Mbits/sec                  sender
          [  4]   0.00-10.00  sec  60.3 MBytes  50.6 Mbits/sec                  receiver
      
          iperf Done.
      
      Over VPN
      iperf3.exe -c 10.10.15.10 -p 2222
      Connecting to host 10.10.15.10, port 2222
      [  4] local 10.10.10.212 port 1778 connected to 10.10.15.10 port 2222
      [ ID] Interval           Transfer     Bandwidth
      [  4]   0.00-1.00   sec  6.12 MBytes  51.4 Mbits/sec
      [  4]   1.00-2.00   sec  8.00 MBytes  67.1 Mbits/sec
      [  4]   2.00-3.00   sec  7.50 MBytes  62.8 Mbits/sec
      [  4]   3.00-4.00   sec  7.75 MBytes  65.1 Mbits/sec
      [  4]   4.00-5.00   sec  8.12 MBytes  68.2 Mbits/sec
      [  4]   5.00-6.00   sec  7.50 MBytes  62.9 Mbits/sec
      [  4]   6.00-7.00   sec  8.25 MBytes  69.2 Mbits/sec
      [  4]   7.00-8.00   sec  7.38 MBytes  61.9 Mbits/sec
      [  4]   8.00-9.00   sec  8.38 MBytes  70.3 Mbits/sec
      [  4]   9.00-10.00  sec  7.50 MBytes  62.9 Mbits/sec
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [ ID] Interval           Transfer     Bandwidth
      [  4]   0.00-10.00  sec  76.5 MBytes  64.2 Mbits/sec                  sender
      [  4]   0.00-10.00  sec  76.5 MBytes  64.2 Mbits/sec                  receiver
      
      iperf Done.
      

      And when going directly I actually have fewer hops, since I go straight from my Juniper firewall to the iPerf server.

      I did some more thorough speed tests, both from a local server inside the network out to the Internet and over my VPN to the Internet. I got the following results:

      VPN speed loss, local server vs. VPN client (DTLS), per speedtest target:

      • FiberbyApS Copenhagen: Download -21%, Upload -21%
      • FibiaPS Taastrup: Download -18%, Upload -21%
      • TDC Group Copenhagen: Download -36%, Upload -28%

      In most cases I lost around 20% on average in both upload and download. But you have to take into account that my traffic must first leave my office, pass through my equipment at home, go out to the speedtest servers, and come all the way back, on top of the encryption overhead. So a 20% loss is not something I see as a shock. Perhaps if I tweaked my settings I could improve those numbers a bit.

      But there are many factors to take into consideration: the number of hops to reach my home environment, packet loss along the way, and so on. For instance, how can one speed test show an almost 40% loss while others show only 18%?
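
      One way to narrow those factors down is to let iPerf measure the path directly (same internal server as in my earlier tests): a UDP run at a fixed rate reports packet loss and jitter, and parallel streams show whether a single TCP flow is the bottleneck. The 50M rate is just an example:

          # UDP at a fixed rate; the report includes loss and jitter
          iperf3.exe -c 10.10.15.10 -p 2222 -u -b 50M
          # Four parallel TCP streams
          iperf3.exe -c 10.10.15.10 -p 2222 -P 4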

      Have you run similar tests in your environment? It would be interesting to see if someone is getting worse/better numbers.

  • Hi Philip,

     

    Did you change the default virtual interface speed to 1 000 000?

     

    The default value is 100 Mbps.
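
    If you want to check what the resource is currently set to, dumping the Network Access resource shows all of its properties (vpn_na is a placeholder for your own resource name):

        tmsh list apm resource network-access vpn_na all-properties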

     

  • Did you ever resolve this? We have very similar issues...