Cookie Persistence Without The Port
Problem this snippet solves:
Summary: This rule was written in response to a requirement in which a client had connections to multiple virtual services, each with multiple ports. We wanted a lightweight cookie-based persistence that did not include the port, but instead took the port from the virtual service destination port. An additional requirement was to persist all of a client's service requests to the same IP across all pools, so that if a pool member was down on port A, all persistence from that client would be re-balanced to another node once the failure was noticed.
This rule illustrates how to use TCL to mimic cookie insert persistence, but without the port number. It also shows how to detect a failed member and then reselect a different one.
Code:
#
# Persist for multiple web services on the same IP, but different ports. Assumption is that
# the virtual server destination port will match the pool members' destination ports.
# If a node is down by monitor or is disabled, all persistence to the node will be discarded, and
# the connection will be re-load balanced from the default pool.
#
# This is just like insert cookie persistence without the port. This iRule gets applied to
# all services. It can be applied to multiple BIG-IPs and will persist to the same node
# IPs without connection mirroring, just like cookie insert persistence.
#
when RULE_INIT {
    # see AskF5 SOL6917 for cookie encoding details:
    # https://support.f5.com/kb/en-us/solutions/public/6000/900/sol6917.html
    set static::persist_cookie "nsel"
    set static::debug 1
}

when HTTP_REQUEST {
    # Record the default pool for this virtual in case we need to re-load balance after we
    # specifically call out the node to persist to
    set def_pool [LB::server pool]
    if { [HTTP::cookie exists $static::persist_cookie] } {
        # We found our IP-based persistence cookie named 'nsel', which was set by the BIG-IP when
        # an appropriate response was issued from a server.
        # Encode the IP using a reversible string map. Simple decode. This method is completely
        # stateless on the BIG-IP and will work as long as all the web-service clients bring back
        # the cookie.
        # set server_ip [string map {a 0 b 1 c 2 d 3 e 4 f 5 g 6 h 7 i 8 j 9 k .} [HTTP::cookie value $static::persist_cookie]]
        set server_ip [b64decode [HTTP::cookie value $static::persist_cookie]]
        if {$static::debug != 0}{
            log local0. "Client dest port [TCP::local_port] persisted to $server_ip encoded IP [HTTP::cookie value $static::persist_cookie]"
        }
        # node [string tolower $server_ip] [TCP::local_port]
        pool $def_pool member [string tolower $server_ip] [TCP::local_port]
        HTTP::cookie remove $static::persist_cookie
        set need_persist 0
    } else {
        set need_persist 1
    }
}

when LB_SELECTED {
    # Record the node that load balancing picked
    set selected_node [LB::server addr]
    # If the node was picked by a deterministic issuing of the node command, for persistence,
    # then re-load balance from the default pool and state that we need the persistence cookie
    # reset on the next proper response from a server in the pool that is up by monitor state.
    if {$static::debug != 0}{
        log local0. "LB_SELECTED fired with $selected_node selected. Its state is [LB::status node [LB::server addr]]."
    }
    if { [LB::status node [LB::server addr]] ne "up" } {
        if {$static::debug != 0}{ log local0. "attempting to persist to failed or disabled node, re-load balancing and persisting" }
        set need_persist 1
        LB::reselect pool $def_pool
    }
}

when HTTP_RESPONSE {
    if { $need_persist } {
        # We did not set the node in the request from a cookie, so either we need to set the
        # original cookie or else we are in a failed state and need to reset it to this server,
        # which responded well.
        # set encoded_server_ip [string map {0 a 1 b 2 c 3 d 4 e 5 f 6 g 7 h 8 i 9 j . k} $selected_node]
        set encoded_server_ip [b64encode $selected_node]
        if {$static::debug != 0}{
            log local0. "Node $selected_node needs persistence added with encoded string $encoded_server_ip"
        }
        HTTP::cookie insert name $static::persist_cookie value $encoded_server_ip
    }
}

when LB_FAILED {
    # We have a real-time failure of the server, so let's fix it by re-load balancing to the
    # default pool and adding persistence to the new node selected.
    if {$static::debug != 0}{ log local0. "server failed to respond or RST the connection after load balancing picked it." }
    set need_persist 1
    LB::reselect pool $def_pool
}
Published Mar 17, 2015
Version 1.0