Forum Discussion

mr_shaggy_17493
Nov 22, 2016

Weblogic: Universal vs Source Address Persistence

Hi All,

 

I have a question comparing the universal and source-address persistence methods for a WebLogic clustered server. The universal persistence configuration refers to this: https://devcentral.f5.com/codeshare/weblogic-jsessionid-persistence.

 

My question:

 

  1. What is the main difference between these two methods? (Other than the fact that one method uses the source address to persist traffic, while the other uses a cookie.)

     

  2. When using the source address method, is there any possibility that the F5 will send traffic to a server other than the primary server of the active session? (Other than the destination server for persistence being down, or the persistence record having timed out.)

     

Thanks in advance..

 

5 Replies

  • With universal persistence, you can create a persistence record on anything you like; an example would be the jsessionid value in the cookie, but it could equally be anything the F5 is able to read. This differs from cookie persistence, as cookie persistence does not create a record in the persistence table; the information is stored in the cookie itself.

     

    Comparing this to source address affinity: this also creates a persistence record, like universal persistence, but it is limited to just the source address. This can cause problems if the sources are all NAT'd behind a firewall, as all connections may then persist to the same pool member.

     

    To answer your other question: if the pool member is down, connections are load balanced to another server in the pool and a new persistence record is created.
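    For reference, a minimal sketch of what a universal-persistence iRule keyed on the JSESSIONID cookie might look like (the linked codeshare entry is the authoritative version; the cookie name and the 1800-second timeout here are illustrative):

    ```tcl
    # Sketch only: persist on the WebLogic session cookie via the
    # Universal Inspection Engine (UIE). Timeout value is illustrative.
    when HTTP_REQUEST {
        set jsess [HTTP::cookie "JSESSIONID"]
        if { $jsess ne "" } {
            # Create/look up a persistence record keyed on the cookie
            # value, with a 30-minute (1800 s) timeout.
            persist uie $jsess 1800
        }
    }
    ```

    The virtual server would need a universal persistence profile that references this iRule.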

     

  • Well noted. I have a problem where the F5 is suspected as the root cause of an error logged on the WebLogic server, error code BEA-100094. This error code says there is a possibility of the load balancer forwarding traffic to the secondary server rather than the primary server.

     

    On the F5, I am using the source address persistence method, and it has been in use since about 2010 without any issue. This configuration has been in place since the box ran OS 10.2.4, and the box has since been upgraded to 11.5.4.

     

    I have enabled logging on the virtual server and didn't find any issue with the persistence: the logs show each source address staying sticky to one pool member. So I just want to make sure whether there is any possibility that would make the F5 forward traffic to another pool member even though it is configured to use source address persistence.

     

    Well thanks for the answer..

     

  • Rather than using universal persistence and matching the jsessionid cookie, I would suggest creating, and using, a cookie persistence profile on the virtual server. If you don't encrypt the cookie the BIG-IP sets, it can also provide troubleshooting information about which server traffic is being load balanced to. An added benefit, in my opinion, is that using a cookie profile for persistence puts you in control of the cookie that persistence is based on.
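    As a rough tmsh sketch of that suggestion (object names cookie_weblogic and vs_weblogic are hypothetical):

    ```shell
    # Sketch only: create an unencrypted cookie-insert persistence
    # profile and attach it to the virtual server.
    tmsh create ltm persistence cookie cookie_weblogic \
        defaults-from cookie method insert cookie-encryption disabled
    tmsh modify ltm virtual vs_weblogic \
        persist replace-all-with { cookie_weblogic }
    ```

    Leaving cookie-encryption disabled is what makes the cookie readable for troubleshooting, as noted above.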

     

  • Persistence only applies after the initial connection. In other words, the first time a client connects to the virtual server, their connection will be load balanced to an available pool member. That could be either the primary or the secondary server, depending on your other pool configuration settings. Upon connecting again, that same client would be directed to the same pool member they originally load balanced to, based on their IP address if using source address affinity persistence. But for a new client (meaning one whose IP address does not match an existing source address persistence record), load balancing is done first. If both the primary and the secondary are available, either could be chosen.

     

    If the goal is to use only the primary server except if it is down, then I suggest using Priority Group Activation on the pool. Assuming there are only two pool members - the primary and the secondary - set the minimum number of available members to 1, and give the primary member a higher priority group number than the secondary (perhaps 10 for the primary and 5 for the secondary). Make sure you apply an appropriate monitor to both pool members so that LTM can detect if the primary becomes unavailable. If it does, then it will activate the secondary pool member - but ONLY if the primary is marked down, either by a monitor or by an administrator. You could also still use source address affinity persistence so that, in the event you manually disabled the primary (for example for maintenance), clients would still be allowed to persist to the primary until their session completed, rather than be switched to the secondary immediately. If you force the primary pool member offline, then all new traffic will go to the secondary regardless of whether they were persisting to the primary or not.
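    A hedged tmsh sketch of the Priority Group Activation setup described above (pool name, member addresses, and monitor are hypothetical):

    ```shell
    # Sketch only: primary gets priority-group 10, secondary gets 5.
    # min-active-members 1 means the secondary only receives new
    # traffic once the primary is marked down.
    tmsh create ltm pool pool_weblogic \
        min-active-members 1 \
        monitor http \
        members add { 10.0.0.1:7001 { priority-group 10 } \
                      10.0.0.2:7001 { priority-group 5 } }
    ```

    The monitor is what lets LTM detect the primary becoming unavailable and activate the secondary.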

     

  • Hi Mr.Shaggy,

     

    Regarding Question 1:

     

    The difference between the two persistence methods is just the "input data" used to populate the persistence session table. Source-IP persistence will always use the source IP of the connecting client to create a session table entry, whereas universal persistence can use any client-provided data as "input data" (this could be the source IP too, but in most cases the session cookie of your application will be used to track the client). The benefit of using the application cookie as "input data" is that the client can freely roam between different ISPs without losing its session, and you can even load balance multiple clients sitting behind a single proxy server.

     

    Regarding Question 2:

     

    SRC_IP and universal persistence will always honor the persistence records, as long as the records have not timed out, the persisted pool members are not offline, and the "input data" hasn't changed somehow (e.g. a different source IP or a different application cookie). There are also some edge cases that may cause the F5 to send a request to the wrong pool member if you mix multiple persistence methods on a single clientside TCP connection pointing to the same pool member. The edge case happens when you don't use a OneConnect profile on your virtual server: in that case the F5 load balances only the initial HTTP request destined for a given pool. So if you change the persistence method of an already established end-to-end connection (e.g. using an iRule that applies different persistence methods based on the requested URI), this may have no effect even when the new persistence method would select a different pool member (see SOL7964 for further information). Besides this more or less common edge case, the advanced persistence settings "Match Across Services", "Match Across Virtual Servers" and "Match Across Pools" may also cause the F5 to change the persisted pool member under certain conditions...
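    For the OneConnect edge case, attaching the profile is a one-line change; a hedged tmsh sketch (the virtual server name vs_weblogic is hypothetical, and the built-in oneconnect profile is used as-is):

    ```shell
    # Sketch only: attach a OneConnect profile so each HTTP request
    # (not just the first on a TCP connection) can be re-load-balanced
    # when an iRule switches persistence methods mid-connection.
    tmsh modify ltm virtual vs_weblogic \
        profiles add { oneconnect }
    ```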

     

    Cheers, Kai