Forum Discussion

mrege (Altocumulus)
Sep 12, 2023

VIP-targeting-VIP solution using Standard and performance L4 VS

Is it possible to configure the following?

Decrypted application traffic is processed on a Performance (Layer 4) virtual server, while the client-facing Standard virtual server performs the SSL offloading and hands the connection off with an iRule redirect.

# iRule on the Standard virtual server for the redirect.
# Note: CLIENTSSL_DATA only fires after an SSL::collect call,
# so CLIENT_ACCEPTED is the usual event for VIP targeting.
when CLIENT_ACCEPTED {
    virtual performance_l4_VS
}

The intention here is to offload the application load balancing to ePVA (embedded Packet Velocity Acceleration) hardware.

Will this reduce CPU load, assuming the forwarded traffic is processed in the FPGA?
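
For reference, a rough tmsh sketch of the two-tier setup described above. Apart from performance_l4_VS, the object names (client_facing_vs, app_pool, fastl4_pva, vip_target_rule) and addresses are placeholders, and pva-acceleration support depends on the platform:

    # Internal Performance (Layer 4) virtual requesting full PVA acceleration
    create ltm profile fastl4 fastl4_pva pva-acceleration full
    create ltm pool app_pool members add { 10.0.0.21:443 10.0.0.22:443 }
    create ltm virtual performance_l4_VS destination 10.0.0.100:443 ip-protocol tcp profiles add { fastl4_pva } pool app_pool

    # Client-facing Standard virtual terminating TLS, with the VIP-targeting iRule attached
    create ltm virtual client_facing_vs destination 10.0.0.10:443 ip-protocol tcp profiles add { tcp clientssl } rules { vip_target_rule }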


  • Hi mrege, that's a good question. You can certainly use vip-to-vip to fan application traffic out to different backend virtuals, but once all that data is already in TMM, I'm not quite sure it would save CPU to push it back down to the hardware at that point, even if it could (which would need to be tested). Also, you can certainly use an iRule, but a policy works as well and keeps the logic in native objects:

    ltm policy vip-to-vip {
        controls { forwarding }
        last-modified 2023-09-12:09:44:17
        requires { http }
        rules {
            fwding-vip {
                actions {
                    0 {
                        forward
                        select
                        virtual /Common/testapp-vip
                    }
                }
            }
        }
        status published
        strategy first-match
    }
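
    To use it, the policy still has to be attached to the client-facing virtual server, along the lines of the following (client_facing_vs is a placeholder name):

        modify ltm virtual client_facing_vs policies add { vip-to-vip }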
    • JRahm (Admin)

      I'm asking internally if anyone else has insights.

      • Brandon_ (Employee)

        In that scenario, since the Standard virtual server is terminating TLS, all of the traffic still needs to be handled by the first TMM. The fastest path is to just let it do its job and forward the traffic.

        Forwarding it to a second virtual server starts another handshake (with TCP) between two TMMs so that the second can make a load balancing decision; essentially, you're adding unnecessary overhead just to have the load balancing decision made at the second virtual server.

        This is a good reference article. https://my.f5.com/manage/s/article/K8082#l4

        The FPGA sits in the dataplane on ingress and egress from the switch (on iSeries, for example), or is itself the network interface on rSeries. So if the first TMM terminating TLS has to process the traffic anyway, it is already being released down to the FPGA for forwarding on egress.
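
        If you want to confirm whether a FastL4 virtual is actually set up for acceleration, a couple of quick tmsh checks (fastl4_pva is a placeholder profile name, and how PVA stats are surfaced varies by platform):

            # Confirm the fastl4 profile requests hardware (PVA) acceleration
            list ltm profile fastl4 fastl4_pva pva-acceleration

            # Watch the virtual server's traffic counters while sending test traffic
            show ltm virtual performance_l4_VS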