SPDY/HTTP2 Profile Impact on Variable Use
When using a SPDY/HTTP2 profile, TCL variables set before the HTTP_REQUEST event are not carried over to later events. This is by design, as the current iRule/TCL implementation is not capable of handling multiple concurrent events, which may be the case with SPDY/HTTP2 multiplexing. This is easily reproducible with this simple iRule:
when CLIENT_ACCEPTED {
    set default_pool [LB::server pool]
}
when HTTP_REQUEST {
    log local0. "HTTP_REQUEST EVENT Client [IP::client_addr]:[TCP::client_port] - $default_pool -"
    # Resulting TCL error: can't read "default_pool": no such variable
    #     while executing "log local0. "HTTP_REQUEST EVENT -- $default_pool --""
}
Storing values in a table works perfectly well and can be used to work around the issue as shown in the iRule below.
when CLIENT_ACCEPTED {
    table set [IP::client_addr]:[TCP::client_port] [LB::server pool]
}
when HTTP_REQUEST {
    set default_pool [table lookup [IP::client_addr]:[TCP::client_port]]
    log local0. "POOL: |$default_pool|"
}
This information is also available in the wiki on the HTTP2 namespace command list page.
- Marcel_Vanko (Nimbostratus)
The new iRules home is https://clouddocs.f5.com/api/irules/
The correct link is https://clouddocs.f5.com/api/irules/HTTP2.html
- Antoine_Prevost (Nimbostratus)
This behavior has changed since LTM v14.1, as stated in the HTTP/2 wiki page.
The behavior is now closer to the HTTP/1 behavior, so variables set before the HTTP_REQUEST event are now accessible. Actually, it seems to be a "copy" of these variables that is accessible, as modifying them will not modify the original variables in the originating namespace.
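A minimal sketch of how one might observe that copy behaviour on v14.1+ (assuming the log output is compared across several requests on the same connection):

when CLIENT_ACCEPTED {
    set default_pool [LB::server pool]
}
when HTTP_REQUEST {
    # on 14.1+ this no longer raises "no such variable"
    log local0. "default_pool seen by this request: $default_pool"
    # if each request only receives a copy, this change should not be
    # visible to later requests on the same connection
    set default_pool "changed_in_this_request"
}

If later requests still log the original pool name rather than "changed_in_this_request", the variable really is a per-request copy.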
Hi Jeremy,
both the handcrafted $static::cmp_hash_array() and the auto-discovered $static::local_tmm_subtable examples should work well.
The values of the $static::cmp_hash_array([TMM::cmp_unit]) entries need to be discovered manually by doing some timing tests. Once the values for each TMM core are figured out, the array can be created once and remains unchanged until the number of TMM cores changes (e.g. an RMA'd unit, changed vCMP provisioning, etc.).
The overhead of the if { [catch { Try access to $static::local_tmm_subtable }] } then { Discover $static::local_tmm_subtable } construct is very small and it even performs slightly better than using an [info exists static::local_tmm_subtable] check. Also keep in mind that the rather complex discovery code path needs to run only a single time on each TMM core. Subsequent requests on the same TMM will be able to reuse $static::local_tmm_subtable just fine and won't cause any [catch] exceptions anymore.
In the end the handcrafted approach will run just marginally faster than the auto-discover example.
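For comparison, a minimal sketch of the [info exists] guard mentioned above might look like this (assuming the timing-based discovery loop from the auto-discovery sample elsewhere in this thread is reused to fill in $static::local_tmm_subtable):

when CLIENT_ACCEPTED {
    # explicit guard instead of the [catch]-based access; per the note above the
    # [catch] variant performs slightly better, but either way the slow
    # discovery path only runs once per TMM core
    if { ![info exists static::local_tmm_subtable] } then {
        # run the timing-based discovery loop from the earlier example here
        # to set static::local_tmm_subtable for this TMM core
    }
    table set -subtable $static::local_tmm_subtable "[IP::client_addr]:[TCP::client_port]" [LB::server pool]
}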
Your "TMM::hash" ideas.
Right now I don't see any valuable use cases for such commands.
Your "table" ideas.
It would indeed be a cool thing to have a [table -local_tmm] command switch to bypass hash-routing and simply stick to the local TMM core. This would allow data storage on a per-TMM level. It would be slightly slower than using $static:: namespace variables, but with the added functionality of built-in garbage collection (via the -lifetime / -timeout switches).
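Purely to illustrate the idea, usage of such a switch might look like this (hypothetical syntax; the -local_tmm switch does not exist in any current release):

when CLIENT_ACCEPTED {
    # hypothetical: -local_tmm would pin the entry to the local TMM core,
    # while -lifetime still provides the built-in garbage collection
    table set -local_tmm -lifetime 180 "[IP::client_addr]:[TCP::client_port]" [LB::server pool]
}
when HTTP_REQUEST {
    # hypothetical lookup against the local TMM core only
    set default_pool [table lookup -local_tmm "[IP::client_addr]:[TCP::client_port]"]
}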
Your "New Variable Scope" ideas
I've checked the outcome of [info level]. The underlying TCP connection and the individual HTTP/2 request are both running at level = 2. This means that TCL can't uplevel/upvar from the HTTP/2 request/response into the TCP connection. I guess the introduction of a new TCL variable scope which spans the individual HTTP/2 requests and the underlying TCP connection would be very difficult to implement, and I even wonder if this would be possible at all without rewriting way too much of TCL...
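A minimal sketch of that check, assuming plain logging is enough to compare the two levels:

when CLIENT_ACCEPTED {
    # stack level of the connection-scoped event
    log local0. "CLIENT_ACCEPTED info level: [info level]"
}
when HTTP_REQUEST {
    # stack level of the request-scoped event; per the test above both report
    # level 2, so uplevel/upvar cannot reach the connection's variables
    log local0. "HTTP_REQUEST info level: [info level]"
}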
Cheers, Kai
Kai,
I didn't realize you meant for there to be additional iRule logic to determine the owning TMM. It does make table -subtable a more palatable option if lookups were guaranteed to be local.
There are some cons to OneConnect, but reverting back to the default pool for every request is something I prefer. I have submitted RFEs to address some things I don't like and have several other features I'd like to see added, but haven't taken the time to submit those yet. Overall, in my experience OneConnect has helped far more often than not.
Maybe it's the original topic that bugs me, or I'm just in a feature request mood because this discussion sparked a few other feature ideas:
1. TMM::hash
Description: Command used to hash a value to calculate the owning TMM.
Syntax:
TMM::hash
TMM::hash table
TMM::hash table -subtable
TMM::hash persist uie
TMM::hash persist source_addr
etc.
That would greatly simplify the process of determining the owning TMM instance. It would likely incur little CPU overhead and probably not be too complicated to develop. It wouldn't be the most used command, but I would have found it useful on a few occasions. I could see it coming in handy for testing and troubleshooting.
2. New table command
What about an additional table command that is TMM specific, maybe table_local or preferably something shorter and more catchy. It would be similar to static variables, but with all the features of table, which would be ideal for storing the default pool. It may be useful in other situations too.
3. New Variable Scope
In one of your earlier comments you mentioned possibly using something like uplevel or upvar to get at variables at the connection level. I tried every way I could, but unless there is some sort of secret command, I don't believe it is possible. As I mentioned, it's like each concurrent HTTP/2 session is its own connection. I would propose adding an additional variable scope, similar to static, but specific to connections. Something like conn, client or session.
It would be used something like this.
when CLIENT_ACCEPTED {
    set conn::default_pool [LB::server pool]
}
when HTTP_REQUEST {
    if {[HTTP2::active]} {
        log local0. "default pool is ${conn::default_pool}"
    }
}
I did do some testing a while back to get a little understanding of the variable scope in HTTP2. I did something like this:
when CLIENT_ACCEPTED {
    set CA "client accepted variable"
}
when HTTP_REQUEST {
    set REQ "HTTP request variable"
    if {[catch {log local0. "client accepted variable $CA"}] != 0} {
        log local0. "outside the Client Accepted scope"
    }
}
when SERVER_CONNECTED {
    set SC "server connected variable"
    if {[catch {log local0. "HTTP request variable $REQ"}] != 0} {
        log local0. "outside the HTTP Request scope"
    }
    if {[catch {log local0. "client accepted variable $CA"}] != 0} {
        log local0. "outside the Client Accepted scope"
    }
}
when HTTP_RESPONSE {
    if {[catch {log local0. "HTTP request variable $REQ"}] != 0} {
        log local0. "outside the HTTP Request scope"
    }
    if {[catch {log local0. "client accepted variable $CA"}] != 0} {
        log local0. "outside the Client Accepted scope"
    }
    if {[catch {log local0. "server connected variable $SC"}] != 0} {
        log local0. "outside the Server Connected scope"
    }
}
when CLIENT_CLOSED {
    if {[catch {log local0. "HTTP request variable $REQ"}] != 0} {
        log local0. "outside the HTTP Request scope"
    }
    if {[catch {log local0. "client accepted variable $CA"}] != 0} {
        log local0. "outside the Client Accepted scope"
    }
    if {[catch {log local0. "server connected variable $SC"}] != 0} {
        log local0. "outside the Server Connected scope"
    }
}
I found CLIENT_ACCEPTED and CLIENT_CLOSED shared a variable scope separate from everything else. Before testing, I half expected SERVER_CONNECTED to share scope with CLIENT_ACCEPTED, but I was wrong.
If I were given the power to pick a new feature, but could choose only one, it would be the conn variable scope. At least for the purpose of this discussion.
Hi Jeremy,
A given -subtable (and its contained keys) will always stick to the same TMM core. This approach is required to enable [table keys] to query just a single TMM core. Without this behavior, a [table keys] command would need to query every single TMM core to get all the key names/counts. This would be a performance nightmare...
Unfortunately you can't control/manipulate the hash-routing of an individual -subtable name. But you can observe the existing hash-routing behavior by clocking the elapsed [clock clicks] for a single [table -subtable] execution. A local TMM table query will in this case require significantly fewer clicks and therefore allows a very easy differentiation.
To simplify the discovery process and its management you can also use a data-plane code block to discover a -subtable value that gets hash-routed to the local TMM instance. See the sample below for how this could be implemented based on Jason's initial iRule functionality.
when RULE_INIT {
    set static::local_tmm_maxclicks 15
}
when CLIENT_ACCEPTED {
    if { [catch {
        table set -subtable $static::local_tmm_subtable "[IP::client_addr]:[TCP::client_port]" [LB::server pool]
    }] } then {
        foreach char [split 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ ""] {
            set timestamp [clock clicks]
            table lookup -subtable "table_hash_$char" dummy
            if { [expr { [clock clicks] - $timestamp }] < $static::local_tmm_maxclicks } then {
                log local0.debug "The subtable \"table_hash_$char\" has taken \"[expr { [clock clicks] - $timestamp }]\" clicks. The table is hash-routed to the local TMM core."
                set static::local_tmm_subtable "table_hash_$char"
                table set -subtable $static::local_tmm_subtable "[IP::client_addr]:[TCP::client_port]" [LB::server pool]
                break
            } else {
                log local0.debug "The subtable \"table_hash_$char\" has taken \"[expr { [clock clicks] - $timestamp }]\" clicks. The table is hash-routed to a different TMM core."
            }
        }
    }
}
when HTTP_REQUEST {
    set default_pool [table lookup -subtable $static::local_tmm_subtable [IP::client_addr]:[TCP::client_port]]
    log local0. "POOL: |$default_pool|"
}
Note: Your VS:: namespace is a pretty cool idea! I would second that feature request! ;-)
Note: Well, OneConnect has its own pros but also some serious cons. And to be honest, I very much dislike the way OneConnect reverts the current pool selection after every single request...
Cheers, Kai
Kai,
You are correct, the scenario would be a rare case. Generally it's not something I would even be concerned about myself, but because it is possible, it's good to know it could happen for that rare case.
I didn't think about coercing the F5 into assigning a specific owning TMM for the subtable. I would be a little surprised if the TMM responsible for creating the subtable is always the TMM that owns the subtable. Is there a feature of subtables that allows the name table_hash_1 to use TMM 0 and table_hash_2 to use TMM 1? I'm not even sure how to test the owning TMM for a table entry, at least not directly.
I would still shy away from table; static variables should accomplish the task. And if the purpose is to select a default pool like in this example iRule, I would still prefer OneConnect over attempting to store the default pool for later use.
when CLIENT_ACCEPTED {
    set static::default_pool([virtual name]) [LB::server pool]
}
when HTTP_REQUEST {
    switch -glob -- [HTTP::uri] {
        /images/ { pool image_pool }
        default { pool $static::default_pool([virtual name]) }
    }
}
If it's desired to keep them cleaned up, the contents could be checked periodically with something like this.
when RULE_INIT {
    if {[array exists static::default_pool]} {
        # array get returns a flat name/value list, so iterate it in pairs
        foreach {vs_name vs_pool} [array get static::default_pool] {
            log local0. "$vs_name $vs_pool"
        }
    } else {
        log local0. "default_pool array does not yet exist"
    }
}
Whenever loaded or updated, that rule would log each virtual and its associated pool once per TMM. One could even set up an alertd event to trigger a script for these messages, though that may be a bit overkill. In my opinion it's not worth it; with only one copy of the array per TMM, even hundreds of virtual servers would not be a significant memory concern.
New Feature Maybe?
I haven't written up an RFE for it yet, but if we could query the pool on the VS directly, we wouldn't have to rely on table or static variables to get the default pool. Maybe add a VS namespace (or whatever it's called) exposing some key config on the VS, similar to the PROFILE commands.
VS::pool
VS::rules
VS::type
VS::destination
etc.
Hi Jeremy,
I guess the deletion of a pool is a somewhat rare edge case... So I would say that both the [table] and $static::array(key) approaches are somewhat stable solutions...
The $static::array($key_name) approach will be very effective if global or per-VS configuration data with limited size needs to be stored (as required in Jason's example). But if the possible $key_name values are not limited, then it may be wise to implement some additional limiters / garbage collection to control the memory consumption and/or to remove old entries.
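A minimal sketch of such a limiter, assuming a hypothetical per-client-address key and an arbitrary cap of 10000 entries:

when CLIENT_ACCEPTED {
    # crude garbage collection: drop the whole array once it grows too large
    if { [info exists static::client_pool] && [array size static::client_pool] > 10000 } {
        unset static::client_pool
    }
    # one entry per client address is an unbounded key space, hence the limiter
    set static::client_pool([IP::client_addr]) [LB::server pool]
}

Using [table] with a -timeout instead would give this cleanup for free, at the price of the hash-routed lookups discussed above.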
In addition, the [table] command usage in Jason's example can be further optimized to avoid cross-TMM communication. The trick is to use a different but well-selected [table -subtable] instance for the individual TMM cores, where each -subtable will always be hash-routed to the local TMM core.
when RULE_INIT {
    # The cmp_hash_array() values are optimized for my 2 TMM core development unit
    array set static::cmp_hash_array {
        0 "table_hash_1"
        1 "table_hash_2"
    }
}
when CLIENT_ACCEPTED {
    table set -subtable $static::cmp_hash_array([TMM::cmp_unit]) "[IP::client_addr]:[TCP::client_port]" [LB::server pool]
}
when HTTP_REQUEST {
    set default_pool [table lookup -subtable $static::cmp_hash_array([TMM::cmp_unit]) "[IP::client_addr]:[TCP::client_port]"]
    log local0. "POOL: |$default_pool|"
}
Cheers, Kai
Table
I recommend against use of table simply because it suspends iRule processing (K12962), potentially impacting performance, especially on an F5 with heavy traffic. There are occasions where table is needed, but this is not one of those cases.
static::variable (array)
What Kai posted above should accomplish the task just fine. However, it is important to note this method slightly changes behavior, causing a potential problem. The scenario is unlikely, but I believe it deserves some attention. The use of table may also result in the same problem. I have not tested the historical method of setting the variable in CLIENT_ACCEPTED.
To prevent this post from spanning pages of info, I won't go into the gory details. I'll just list the steps demonstrating the problem.
Assumptions: there are 2 pools (pool_1 and pool_2), and the browser is a client connecting with HTTP/2.
- Configure VS with iRule performing default pool selection via static variable with pool_1.
- Make request from a browser to the VS to populate variable.
- Change the default pool on the VS to pool_2.
- Delete pool_1.
- Refresh the page from the browser. This results in a TCL error and a browser error (Chrome reported ERR_SPDY_PROTOCOL_ERROR).
The error persisted regardless of what I did until one of the following occurred:
- A new connection made on the same TMM instance to update the variable.
- Manually delete the connection on the F5 forcing my browser to open a new one.
- Wait for the connection to time-out.
If the purpose is to determine the default pool, adding a OneConnect profile to the VS can accomplish this. With a OneConnect profile, the load balancing decision occurs on each request. This results in LB::server pool returning the default pool as long as it is run prior to any selection command such as pool.
This behavior can be demonstrated in a rule like this (assuming the VS has a OneConnect profile):
when HTTP_REQUEST {
    log local0. "should always be the default pool - [LB::server pool]"
    switch -glob -- [HTTP::path] {
        /images/* { pool image_caching }
        /app/* { pool main_app }
        *.js { pool javascript_storage }
    }
}
This is true with the exception of a couple of versions where F5 "changed the behavior": K00725997
Conditionally set variable
Another way to accomplish the task is to conditionally set the variable in HTTP_REQUEST. This has the advantage of looking and behaving virtually identically to setting it in CLIENT_ACCEPTED.
How it works (I think)
when HTTP_REQUEST {
    # we don't care about non-HTTPv2 requests
    if {![HTTP2::active]} { return }
    if {![info exists default_pool]} {
        set default_pool [LB::server pool]
    }
    log local0. "This should always match the default pool on the VS - ${default_pool}"
}
I haven't found official F5 information detailing variable scope and HTTP2, but I can explain what I've seen.
Each HTTP2 stream being processed concurrently receives its own scope for load balancing and iRule variables. Almost as if each concurrent session is its own little connection. This can be demonstrated/verified with an iRule like this:
when HTTP_REQUEST {
    # we don't care about non-HTTPv2 requests
    if {![HTTP2::active]} { return }
    if {![info exists default_pool]} {
        set default_pool [LB::server pool]
        log local0. "default_pool variable set for concurrent stream - [HTTP2::concurrency], set to - ${default_pool}"
    } else {
        log local0. "default_pool variable exists on concurrent stream - [HTTP2::concurrency], set to - ${default_pool}"
    }
}
As you refresh the page, logging indicates each concurrent session sets the variable only once.
If each concurrent stream receives its own scope, it's logical to conclude setting a variable in each consumes more memory. However, when using HTTP/2, each client should only use a single TCP connection in lieu of concurrent TCP connections, so the trade-off is setting a variable in each concurrent HTTP/2 stream as opposed to setting the variable once across multiple TCP connections.
- Nazir_52641 (Cirrus)
Hi Jason,
Table access in the CLIENT_CLOSED event can cause the iRule event to suspend. It is better to use static variables to avoid CLIENT_CLOSED getting suspended. Moreover, we may have to use associative arrays / hash maps so that any variable we are using is tied to the client connection.
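A minimal sketch of that suggestion, assuming a hypothetical static array keyed by client address and port:

when CLIENT_ACCEPTED {
    # per-connection data kept in a static array instead of the session table
    set static::conn_pool([IP::client_addr]:[TCP::client_port]) [LB::server pool]
}
when HTTP_REQUEST {
    set default_pool $static::conn_pool([IP::client_addr]:[TCP::client_port])
    log local0. "POOL: |$default_pool|"
}
when CLIENT_CLOSED {
    # cleanup happens here without any table call that could suspend the event
    if { [info exists static::conn_pool([IP::client_addr]:[TCP::client_port])] } {
        unset static::conn_pool([IP::client_addr]:[TCP::client_port])
    }
}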
- JRahm (Admin)
That was the workaround provided by support, but other workarounds might be possible and/or more efficient. I'll investigate and let you know.