Community Manager

When using a SPDY/HTTP2 profile, TCL variables set before the HTTP_REQUEST event are not carried over to later events. This is by design, as the current iRule/TCL implementation is not capable of handling multiple concurrent events, which may be the case with SPDY/HTTP2 multiplexing. This is easily reproducible with this simple iRule:

  when CLIENT_ACCEPTED {
      set default_pool [LB::server pool]
  }
  when HTTP_REQUEST {
      log local0. "HTTP_REQUEST EVENT Client [IP::client_addr]:[TCP::client_port] - $default_pool -"
      # Fails with: can't read "default_pool": no such variable
      #     while executing "log local0. "HTTP_REQUEST EVENT -- $default_pool --"
  }

Storing values in a table works perfectly well and can be used to work around the issue as shown in the iRule below.

  when CLIENT_ACCEPTED {
      table set [IP::client_addr]:[TCP::client_port] [LB::server pool]
  }
  when HTTP_REQUEST {
      set default_pool [table lookup [IP::client_addr]:[TCP::client_port]]
      log local0. "POOL: |$default_pool|"
  }

This information is also available in the wiki on the HTTP2 namespace command list page.


Thanks Jason.



Hi Jason,


how about using static:: variables to store this information? It should be slightly more CMP-friendly, shouldn't it?


  when CLIENT_ACCEPTED {
      set static::(DefPool_[virtual]) [LB::server pool]
  }
  when HTTP_REQUEST {
      log local0. "POOL: |$static::(DefPool_[virtual])|"
  }

Can you hand out some additional information on how HTTP/2 requests are processed by the TCL runtime environment? Are those HTTP/2 requests handled as a child stack frame ([info level] = 3) of the TCP connection ([info level] = 2)? If a child stack frame is used, can [uplevel] / [upvar] be used to access the TCP connection variables?


Cheers, Kai


Community Manager

That was the workaround provided by support, but other workarounds might be possible and/or more efficient. I'll investigate and let you know.



Hi Jason,


Table access in the CLIENT_CLOSED event can cause the iRule event to suspend. It is better to use static variables to avoid CLIENT_CLOSED getting suspended. Moreover, we may have to use associative arrays / hash maps so that each variable we use is tied to the client connection.
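As a rough sketch of that idea (untested; the array name static::def_pool and the key format are my own choices, not from this thread):

  when CLIENT_ACCEPTED {
      # illustrative only: key the entry to this specific client connection
      set static::def_pool([IP::client_addr]:[TCP::client_port]) [LB::server pool]
  }
  when HTTP_REQUEST {
      log local0. "POOL: |$static::def_pool([IP::client_addr]:[TCP::client_port])|"
  }
  when CLIENT_CLOSED {
      # unlike table, static arrays have no timeout, so clean up manually
      unset -nocomplain static::def_pool([IP::client_addr]:[TCP::client_port])
  }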

I recommend against using table simply because it suspends iRule processing (K12962), potentially impacting performance, especially on an F5 with heavy traffic. There are occasions where table is needed, but this is not one of them.


static::variable (array)

What Kai posted above should accomplish the task just fine. However, it is important to note this method slightly changes behavior, causing a potential problem. The scenario is unlikely, but I believe it deserves some attention. The use of table may also result in the same problem. I have not tested the historical method of setting the variable in CLIENT_ACCEPTED.


To prevent this post from spanning pages of info, I won't go into the gory details. I'll just list steps demonstrating the problem.


Assume there are 2 pools, pool_1 and pool_2, and the browser is a client connecting with HTTPv2.


  1. Configure VS with iRule performing default pool selection via static variable with pool_1.
  2. Make request from a browser to the VS to populate variable.
  3. Change the default pool on the VS to pool_2.
  4. Delete pool_1.
  5. Refresh the page from the browser. This results in a TCL error and a browser error (Chrome reported ERR_SPDY_PROTOCOL_ERROR).

The error persisted regardless of what I did until one of the following occurred:


  1. A new connection made on the same TMM instance to update the variable.
  2. Manually delete the connection on the F5 forcing my browser to open a new one.
  3. Wait for the connection to time out.

Other Options

OneConnect

If the purpose is to determine the default pool, adding a OneConnect profile to the VS can accomplish this. With a OneConnect profile, the load balancing decision occurs on each request. This results in the behavior of LB::server pool returning the default pool as long as it is run prior to any selection command such as pool.


This behavior can be demonstrated in a rule like this (assuming the VS has a oneconnect profile):


  when HTTP_REQUEST {
      log local0. "should always be the default pool - [LB::server pool]"
      switch -glob -- [HTTP::path] {
          /images/* { pool image_caching }
          /app/* { pool main_app }
          *.js { pool javascript_storage }
      }
  }

This is true with the exception of a couple of versions where F5 "changed the behavior": K00725997


Conditionally set variable

Another way to accomplish the task is to conditionally set the variable in HTTP_REQUEST. This has the advantage of looking and behaving virtually identical to setting in CLIENT_ACCEPTED.


  when HTTP_REQUEST {
      # we don't care about non-HTTPv2 requests
      if {![HTTP2::active]} { return }
      if {![info exists default_pool]} {
          set default_pool [LB::server pool]
      }
      log local0. "This should always match the default pool on the VS - ${default_pool}"
  }

How it works (I think)

I haven't found official F5 information detailing variable scope and HTTP2, but I can explain what I've seen.


Each HTTP2 stream being processed concurrently receives its own scope for load balancing and iRule variables. Almost as if each concurrent session is its own little connection. This can be demonstrated/verified with an iRule like this:


  when HTTP_REQUEST {
      # we don't care about non-HTTPv2 requests
      if {![HTTP2::active]} { return }
      if {![info exists default_pool]} {
          set default_pool [LB::server pool]
          log local0. "default_pool variable set for concurrent stream - [HTTP2::concurrency], set to - ${default_pool}"
      } else {
          log local0. "default_pool variable exists on concurrent stream - [HTTP2::concurrency], set to - ${default_pool}"
      }
  }

As you refresh the page, logging indicates each concurrent session sets the variable only once.


If each concurrent stream receives its own scope, it's logical to conclude that setting a variable in each consumes more memory. However, when using HTTPv2, each client should only use a single TCP connection in lieu of concurrent TCP connections, so the trade-off is setting a variable in each HTTP2 concurrent session as opposed to setting the variable once across multiple TCP connections.



Hi Jeremy,


I guess the deletion of a pool is a somewhat rare edge case... So I would say that both the [table] and $static::array(key) approaches are somewhat stable solutions... 😉


The $static::array($key_name) approach will be very effective if global or per-VS configuration data with limited size needs to be stored (as required in Jason's example). But if the possible $key_name values are not limited, then it may be wise to implement some additional limiters / garbage collection to control the memory consumption and/or to remove old entries.


In addition, the [table] command usage of Jason's example can be further optimized to avoid cross-TMM communication. The trick is to use a different but well-selected [table -subtable] instance for the individual TMM cores, where each -subtable will always be hash-routed to the local TMM core.


  when RULE_INIT {
      # The cmp_hash_array() values are optimized for my 2 TMM core development unit
      array set static::cmp_hash_array {
          0 "table_hash_1"
          1 "table_hash_2"
      }
  }
  when CLIENT_ACCEPTED {
      table set -subtable $static::cmp_hash_array([TMM::cmp_unit]) "[IP::client_addr]:[TCP::client_port]" [LB::server pool]
  }
  when HTTP_REQUEST {
      set default_pool [table lookup -subtable $static::cmp_hash_array([TMM::cmp_unit]) "[IP::client_addr]:[TCP::client_port]"]
      log local0. "POOL: |$default_pool|"
  }

Cheers, Kai




You are correct, the scenario would be a rare case. Generally it's not something I would even be concerned about myself, but because it is possible, it's good to know it could happen for that rare case.


I didn't think about coercing the F5 to assign a specific owning TMM for the subtable. I would be a little surprised if the TMM responsible for creating the subtable is always the TMM that owns the subtable. Is there a feature of subtables that ties the name table_hash_1 to TMM 0 and table_hash_2 to TMM 1? I'm not even sure how to test the owning TMM for a table entry, at least not directly.


I would still shy away from table, static variables should accomplish the task. And if the purpose is to select a default pool like in this example iRule, I would still prefer OneConnect over attempting to store the default pool for later use.


  when CLIENT_ACCEPTED {
      set static::default_pool([virtual name]) [LB::server pool]
  }
  when HTTP_REQUEST {
      switch -glob -- [HTTP::uri] {
          /images/* { pool image_pool }
          default { pool $static::default_pool([virtual name]) }
      }
  }

If it's desired to keep them cleaned up, contents could be checked periodically with something like this.


  when RULE_INIT {
      if {[array exists static::default_pool]} {
          # array get returns a flat key/value list, so consume it in pairs
          foreach {vs pool} [array get static::default_pool] {
              log local0. "$vs $pool"
          }
      } else {
          log local0. "default_pool array does not yet exist"
      }
  }

Whenever loaded or updated, that rule would log each virtual and associated pool once per TMM. One could even set an alertd event to trigger a script for these messages, though that may be a bit overkill. In my opinion it's not worth it; with only one copy of the array per TMM, even hundreds of virtual servers are not a significant memory concern.


New Feature Maybe?

I haven't written up an RFE for it yet, but if we could query the pool on the VS directly, we wouldn't have to rely on table or static variables to get the default pool. Maybe add a VS namespace (or whatever it's called) exposing some key config on the VS, similar to the PROFILE commands.




Hi Jeremy,


A given -subtable (and its contained keys) will always stick to the same TMM core. This approach is required to enable [table keys] to query just a single TMM core. Without this behavior, a [table keys] command would need to query every single TMM core to get all the key names/counts. This would be a performance nightmare...


Unfortunately you can't control/manipulate the hash-routing of an individual -subtable name. But you can observe the existing hash-routing behavior by clocking the elapsed [clock clicks] of a single [table -subtable] execution. A local TMM table query will in this case require significantly fewer clicks and therefore allows a very easy differentiation.


To simplify the discovery process and management, you can also use a data-plane code block to discover a -subtable value that gets hash-routed to the local TMM instance. See the sample below for how this could be implemented based on Jason's initial iRule functionality.


  when RULE_INIT {
      set static::local_tmm_maxclicks 15
  }
  when CLIENT_ACCEPTED {
      if { [catch {
          table set -subtable $static::local_tmm_subtable "[IP::client_addr]:[TCP::client_port]" [LB::server pool]
      }] } then {
          foreach char [split 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ ""] {
              set timestamp [clock clicks]
              table lookup -subtable "table_hash_$char" dummy
              if { [expr { [clock clicks] - $timestamp }] < $static::local_tmm_maxclicks } then {
                  log local0.debug "The subtable \"table_hash_$char\" has taken \"[expr { [clock clicks] - $timestamp }]\" clicks. The table is hash-routed to the local TMM core."
                  set static::local_tmm_subtable "table_hash_$char"
                  table set -subtable $static::local_tmm_subtable "[IP::client_addr]:[TCP::client_port]" [LB::server pool]
                  break
              } else {
                  log local0.debug "The subtable \"table_hash_$char\" has taken \"[expr { [clock clicks] - $timestamp }]\" clicks. The table is hash-routed to a different TMM core."
              }
          }
      }
  }
  when HTTP_REQUEST {
      set default_pool [table lookup -subtable $static::local_tmm_subtable "[IP::client_addr]:[TCP::client_port]"]
      log local0. "POOL: |$default_pool|"
  }

Note: Your VS:: namespace is a pretty cool idea! I would second that feature request! ;-)


Note: Well, OneConnect has its own pros but also serious cons. And to be honest, I very much dislike the way OneConnect reverts the current pool selection after every single request... 😞


Cheers, Kai




I didn't realize you meant for there to be additional iRule logic to determine the owning TMM. It does make table -subtable a more palatable option if lookups are guaranteed to be local.


There are some cons to OneConnect, but reverting back to the default pool for every request is something I prefer. I have submitted RFEs to address some things I don't like and have several other features I'd like to see added, but haven't taken the time to submit those yet. Overall, in my experience OneConnect has helped far more often than not.


Maybe it's the original topic that bugs me, or I'm just in a feature-request mood, because this discussion sparked a few other feature ideas:


1. TMM::hash

Description

Command used to hash a value to calculate the owning TMM.

Syntax

  TMM::hash table
  TMM::hash table -subtable
  TMM::hash persist uie
  TMM::hash persist source_addr


That would greatly simplify the process of determining the owning TMM instance. It would likely incur little CPU overhead and probably not be too complicated to develop. It wouldn't be the most used command, but I would have found it useful on a few occasions. I could see it coming in handy for testing and troubleshooting.


2. New table command

What about an additional table command that is TMM-specific, maybe table_local or preferably something shorter and catchier. It would be similar to static variables, but with all the features of table, which would be ideal for storing the default pool. It may be useful in other situations too.
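Purely as a hypothetical illustration of that idea (no such command exists today; the name table_local and its syntax are invented here, mirroring the real table command):

  when CLIENT_ACCEPTED {
      # hypothetical: entry lives only on the local TMM, no cross-TMM hash-routing hop
      table_local set "[IP::client_addr]:[TCP::client_port]" [LB::server pool] 3600
  }
  when HTTP_REQUEST {
      # hypothetical lookup, always answered by the local TMM
      set default_pool [table_local lookup "[IP::client_addr]:[TCP::client_port]"]
      log local0. "POOL: |$default_pool|"
  }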


3. New Variable Scope

In one of your earlier comments you mentioned possibly using something like uplevel or upvar to get at variables at the connection level. I tried every way I could, but unless there is some sort of secret command, I don't believe it is possible. As I mentioned, it's like each concurrent HTTP2 session is its own connection. I would propose adding an additional variable scope, similar to static, but specific to connections. Something like conn, client or session.


It would be used something like this.


  when CLIENT_ACCEPTED {
      set conn::default_pool [LB::server pool]
  }
  when HTTP_REQUEST {
      if {[HTTP2::active]} {
          log local0. "default pool is ${conn::default_pool}"
      }
  }

I did do some testing a while back to get a little understanding of the variable scope in HTTP2. I did something like this:


  when CLIENT_ACCEPTED {
      set CA "client accepted variable"
  }
  when HTTP_REQUEST {
      set REQ "HTTP request variable"
      if {[catch {log local0. "client accepted variable $CA"}] != 0} {
          log local0. "outside the Client Accepted scope"
      }
  }
  when SERVER_CONNECTED {
      set SC "server connected variable"
      if {[catch {log local0. "HTTP request variable $REQ"}] != 0} {
          log local0. "outside the HTTP Request scope"
      }
      if {[catch {log local0. "client accepted variable $CA"}] != 0} {
          log local0. "outside the Client Accepted scope"
      }
  }
  when HTTP_RESPONSE {
      if {[catch {log local0. "HTTP request variable $REQ"}] != 0} {
          log local0. "outside the HTTP Request scope"
      }
      if {[catch {log local0. "client accepted variable $CA"}] != 0} {
          log local0. "outside the Client Accepted scope"
      }
      if {[catch {log local0. "server connected variable $SC"}] != 0} {
          log local0. "outside the Server Connected scope"
      }
  }
  when CLIENT_CLOSED {
      if {[catch {log local0. "HTTP request variable $REQ"}] != 0} {
          log local0. "outside the HTTP Request scope"
      }
      if {[catch {log local0. "client accepted variable $CA"}] != 0} {
          log local0. "outside the Client Accepted scope"
      }
      if {[catch {log local0. "server connected variable $SC"}] != 0} {
          log local0. "outside the Server Connected scope"
      }
  }

I found CLIENT_ACCEPTED and CLIENT_CLOSED shared a variable scope separate from everything else. Before testing, I half expected SERVER_CONNECTED to share scope with CLIENT_ACCEPTED, but I was wrong.


If I were given the power to pick a new feature, but could choose only one, it would be the conn variable scope. At least for the purpose of this discussion.



Hi Jeremy,


Both the handcrafted $static::cmp_hash_array() and the auto-discovered $static::local_tmm_subtable examples should work well.


The values of $static::cmp_hash_array([TMM::cmp_unit]) need to be discovered manually by doing some timing tests. Once the values for each TMM core are figured out, the array can be created once and remain unchanged until the number of TMM cores changes (e.g. an RMA unit, changed vCMP provisioning, etc.).


The overhead of the if { [catch { try access to $static::local_tmm_subtable }] } then { discover $static::local_tmm_subtable } pattern is very small and performs even slightly better than an [info exists static::local_tmm_subtable] check. Also keep in mind that the rather complex discovery code path needs to run only a single time on each TMM core. Subsequent requests on the same TMM will be able to reuse $static::local_tmm_subtable just fine and won't cause any [catch] exceptions anymore.


In the end the handcrafted approach will run just marginally faster than the auto-discovery example.


Your "TMM::hash" ideas.


Right now I don't see any valuable use cases for such commands.


Your "table" ideas.


It would indeed be a cool thing to have a [table -local_tmm] command switch to bypass hash-routing and simply stick to the local TMM core. This would allow data storage on a per-TMM level. It would be slightly slower than using $static:: namespace variables, but with the added functionality of built-in garbage collection (via the -timeout / -lifetime switches).


Your "New Variable Scope" ideas


I've checked the outcome of [info level]. The underlying TCP connection and the individual HTTP/2 requests are both running at level 2. This means that TCL can't uplevel/upvar from the HTTP/2 request/response into the TCP connection. I guess an introduction of a new TCL variable scope which spans the individual HTTP/2 requests and the underlying TCP connection would be very difficult to implement, and I even wonder whether it would be possible without rewriting TCL way too much... 😞


Cheers, Kai



This behavior has changed since LTM v14.1, as stated in the HTTP/2 wiki page.

The behaviour is now more similar to the HTTP/1 behaviour, so variables set before the HTTP_REQUEST event are now accessible. Actually, it seems to be more a "copy" of these variables that is accessible, as modifying them will not modify the original variables in the originating namespace.
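Based on that description, the v14.1+ behavior could be sketched like this (untested; the log wording is mine, and the CLIENT_CLOSED check relies on it sharing scope with CLIENT_ACCEPTED as observed earlier in this thread):

  when CLIENT_ACCEPTED {
      set default_pool [LB::server pool]
  }
  when HTTP_REQUEST {
      # v14.1+: readable here, even for HTTP/2 streams ...
      log local0. "POOL: |$default_pool|"
      # ... but this should only change the per-stream copy
      set default_pool "modified"
  }
  when CLIENT_CLOSED {
      # expected to still log the original value set in CLIENT_ACCEPTED
      log local0. "POOL at close: |$default_pool|"
  }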

Version history
Last update:
‎19-Dec-2016 07:29
Updated by: