We are in the beta test before rolling out the F5 LTM to everyone. Our load balancing method is currently set to Predictive (member) at the direction of the F5 contractor who assisted us with our install and setup. I have some concerns about our persistence setup. Using Predictive (member) as the load balancing method makes it difficult for me to tell whether a connection is being sent to server X because he's the "best" according to the Predictive (member) calculation, or because a persistence record is saying "that user goes to server X". Is there somewhere to see the calculation the BIG-IP is currently tracking for each server? Basically I want to be able to look at the calculation and say "OK, everyone is being sent to server 3 because he's currently the best performer" or "it looks like people should start going to server 2 pretty soon, as the other servers' performance seems to be going down". Right now the best I can do is guess.
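For anyone curious what a "best performer" ranking could even look like: F5 doesn't publish the internal Predictive (member) score, but conceptually it's Observed-style ranking by layer-4 connection counts, adjusted by whether each member's load is trending up or down. Here's a toy Python sketch of that idea — this is NOT F5's actual formula, just an illustration of "fewest connections, penalized if the trend is rising":

```python
# Toy sketch of a Predictive-style ranking. NOT F5's internal formula;
# purely illustrative of "current load adjusted by trend".
def predictive_rank(samples):
    """samples: dict of member -> list of recent connection counts,
    oldest first. Returns members ordered most- to least-preferred."""
    scores = {}
    for member, counts in samples.items():
        current = counts[-1]
        trend = counts[-1] - counts[0]   # positive = load is rising
        scores[member] = current + trend  # rising load is penalized
    return sorted(scores, key=scores.get)

ranking = predictive_rank({
    "server1": [40, 50, 60],   # busy and getting busier
    "server2": [30, 30, 30],   # steady
    "server3": [50, 40, 30],   # busy but draining
})
print(ranking)  # -> ['server3', 'server2', 'server1']
```

Note how server3's falling trend lets it outrank server2 even though their current counts are equal — that "memory" of the trend is what separates Predictive from plain Least Connections.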
Thanks for the reply. That shows me the persistence records, which isn't what I'm looking for.
I'm wanting to see the performance numbers the F5 is calculating for each server. That way I can tell management "more traffic should be going to server X because he currently has the best performance calculation".
Right now, if management asks why all the traffic is going to server X, the best I can do is guess and hint vaguely at F5 black magic. That reasoning is insufficient for management. I really want a concrete statistic on the F5 that says "server X has a current performance of <insert number or stat or whatever> vs. server Y, which has a performance of <insert number or stat or whatever>, and server Z, which has a performance of <insert number or stat or whatever>, and based on those numbers the F5 will be sending the majority of traffic to server <insert best performing server based on the predictive stats here>".
Well, that's a bummer. I think I may switch the load balancing from Predictive (member) to Round Robin for a few days. That may make it more obvious whether my load is getting balanced correctly or persistence records are messing with me.
Correct, and from the information I'm seeing there, they are not balancing correctly, which is why I wanted to see whatever metric Predictive (member) was using to do the balancing. I've switched from Predictive (member) to Round Robin to remove the magic-metric variable, and I'm seeing that connections still tend to all go to the same server, so I know it's not balancing correctly. I've opened a ticket with support.
Yes, thank you. I'm aware of the different methods. I suspect that our VPN solution, Zscaler, is preventing the load balancing from working correctly because the BIG-IP sees all VPN traffic as coming from the same IPs. I've opened a ticket with support.
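To make the suspected failure mode concrete: if persistence is keyed on source address and every user egresses from the same Zscaler IP, the very first user pins that IP to one member and everyone else follows, no matter how good the balancing method is. A minimal Python sketch (the IPs and the persistence model are made up for illustration):

```python
# Illustrative model of source-address persistence: the first time a
# source IP is seen it is balanced (round robin here), after that the
# persistence record always wins. All IPs below are examples.
servers = ["server1", "server2", "server3"]
persistence = {}   # src_ip -> pinned pool member
next_rr = 0        # round-robin pointer for first-time sources

def pick_member(src_ip):
    global next_rr
    if src_ip not in persistence:
        persistence[src_ip] = servers[next_rr % len(servers)]
        next_rr += 1
    return persistence[src_ip]

# 30 distinct users, but all egressing from one VPN IP:
vpn_hits = [pick_member("203.0.113.10") for _ in range(30)]
print(set(vpn_hits))   # everyone lands on a single member

# The same 30 users with distinct client addresses spread out fine:
persistence.clear()
next_rr = 0
hits = [pick_member(f"198.51.100.{i}") for i in range(30)]
print(sorted(set(hits)))  # all three members receive traffic
```

The fix in that situation is generally to persist (or balance) on something other than the raw source address, since the source address carries no per-user information behind the VPN.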
If an application isn't just HTTP/HTTPS, do you even need persistence profiles? In my testing with HTTP/HTTPS applications, the connections don't stay current; in that case you need persistence records to send you to the same server across your session. For the application I'm load balancing, from what I've been seeing in the pool connections, once a connection to the application is made it seems to stay "current" the entire time the user has the application open. I'm wondering if just turning off the persistence records would "fix" the problem.
Universal or ssl_addr persistence could, and mostly will, use the IP address from the Zscaler instance.
We have seen issues inbound to a VIP where the clients were behind a proxy. The VIP doesn't see the client IP; it sees the src.ip as the external IP of the proxy.