Forum Discussion

NiHo_202842
Aug 01, 2016

Pool load balancing to one node

Hi guys,

So we have an HTTP application behind our 11.5.1 BIG-IPs, which in turn are (temporarily) behind an ISAPI filter that forwards HTTP traffic to the BIG-IPs. What we saw is that 24 hours after putting our BIG-IPs in between, all users were on one node, which was severely stressing it. We saw this in the application logs as well as in the BIG-IP pool statistics.

I had to force the node offline to rebalance the visitors, because of the cookie persistence. Three days later we had the same problem. Again I forced the node offline, tried round robin load balancing, and removed the fallback persistence 'source_addr' to be sure. (Am I correct in saying that fallback persistence shouldn't be used when using HTTP profiles?)
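
For reference, removing the fallback persistence on the virtual server was done roughly like this (tmsh sketch, not the exact command history):

    tmsh modify ltm virtual /ITSS/live-hwo-vs fallback-persistence none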

So I have the following pool configuration:

ltm pool /ITSS/live-hwo-pool {
    load-balancing-mode least-connections-member
    members {
        server1.company.com:commplex-link {
            address 2.170.0.11
            session monitor-enabled
            state up
        }
        server2.company.com:commplex-link {
            address 2.170.0.12
            session monitor-enabled
            state up
        }
    }
    monitor tcp and /ITSS/fw-prd-mon
    partition ITSS
}

And the virtual server:

ltm virtual /ITSS/live-hwo-vs {
    destination /ITSS/2.170.0.100:https
    ip-protocol tcp
    mask 255.255.255.255
    partition ITSS
    persist {
        cookie {
            default yes
        }
    }
    policies {
        /ITSS/asm_auto_l7_policy__live-hwo-vs
    }
    pool /ITSS/live-hwo-pool
    profiles {
        /ITSS/live-hwo-ssl {
            context clientside
        }
        http { }
        httpcompression { }
        no-reject-dos { }
        oneconnect { }
        tcp {
            context clientside
        }
        tcp-lan-optimized {
            context serverside
        }
        websecurity { }
    }
    rules {
        maintenance-unavailable-rule
        secure-cookie-rule
        /ITSS/url-rewrite
        /ITSS/show-error-page-rule
    }
    security-log-profiles {
        "Log illegal request and response"
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    vlans {
        EXT-WIN-11
    }
    vlans-enabled
    vs-index 507
}

It hasn't happened again since, but I would love some suggestions or feedback on why this was happening. Thanks!

  • You should check through /var/log/ltm to ensure that the other pool member was not marked down. Otherwise, it could be that the node was selected statically by one of the iRules. Are you certain that LTM is provisioned as well as ASM?
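
    For example, on the BIG-IP itself (standard commands; the pool and module names are taken from your config):

        grep live-hwo-pool /var/log/ltm    # look for monitor up/down messages for the members
        tmsh list sys provision            # confirm ltm (and asm) are provisioned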

     

    I'd also check the incoming traffic - for instance, if the filter is acting as a proxy with all HTTP traffic arriving on one TCP connection, then the load-balancing decision will be made only once for that TCP connection. It's worth assigning a OneConnect profile as well, because this causes an LB::detach, which should spread the load across both pool members.
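
    If you want to make that explicit, a minimal iRule sketch along these lines (a hypothetical rule, assuming the HTTP profile shown above is attached) forces a fresh load-balancing decision for every request:

        when HTTP_REQUEST {
            # detach the server-side flow so the next request is load balanced again
            LB::detach
        }

    Note that cookie persistence will still pin subsequent requests to the same member, so this mainly affects the first request from each client.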

     

  • Can you remove persistence if it's not needed and use an HTTP test tool to generate connections, to see if the LB algorithm is working properly?
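
    While testing, you can watch how connections spread across the members with something like this from tmsh (standard statistics commands, your pool name substituted):

        tmsh reset-stats ltm pool /ITSS/live-hwo-pool
        tmsh show ltm pool /ITSS/live-hwo-pool members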

     


     

  • What's your OneConnect source mask set to? Does the BIG-IP see connections coming from only "one" address?

     

    If this is the case, the BIG-IP may be using the OneConnect feature to re-use each TCP connection to the back-end server... just a thought.
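
    You can check that from tmsh, e.g. (standard command; the default profile is named oneconnect, substitute yours if it's a custom one):

        tmsh list ltm profile one-connect oneconnect

    A source mask of 0.0.0.0 lets any client re-use any idle server-side connection, while 255.255.255.255 keeps re-use per client address.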

     

    • NiHo_202842

      Hi Iain,

      Source Mask: 0.0.0.0
      Maximum Size: 10000 connections
      Maximum Age: 86400 seconds
      Maximum Reuse: 1000
      Idle Timeout Override:
  • Persistence is needed because the application servers do not persist session state in a database.

     

  • Hi NiHo,

     

    The fallback SRC_IP persistence profile should be removed. It will most likely cause the ISAPI servers (I assume the ISAPI filter is forwarding the requests in full proxy mode) to fall back to SRC_IP persistence whenever the first request does not already contain a valid persistence cookie. In the end, a single ISAPI server (a single IP) would always be balanced to a single pool member (same persistence record).

     

    So just use "Round Robin" or "Least Connections" balancing (used for the first request) together with cookie-based persistence (used for subsequent requests from the same client).

     

    Note: If cookie persistence alone is not stable enough, you may want to implement UIE persistence based on a custom cookie AND the X-Forwarded-For header information passed by your ISAPI application (as a fallback).
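
    A rough sketch of such an iRule (assumes a Universal persistence profile referencing this rule is assigned to the virtual server; the cookie name MySession and the 1800-second timeout are just placeholders):

        when HTTP_REQUEST {
            # prefer the application's own session cookie, fall back to the XFF header
            if { [HTTP::cookie exists "MySession"] } {
                persist uie [HTTP::cookie "MySession"] 1800
            } elseif { [HTTP::header exists "X-Forwarded-For"] } {
                persist uie [HTTP::header "X-Forwarded-For"] 1800
            }
        }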

     

    Cheers, Kai