Forum Discussion
Dealing with iRule $variables for HTTP2 workload while HTTP MRF Router is enabled
Hi Folks,
I'd like to start a discussion on how to deal with iRule $variables that traverse independent iRule events, in combination with HTTP2 workload and the HTTP MRF Router enabled (e.g. HTTP2 Full-Proxy mode or, optionally, HTTP2 Gateway mode).
In detail, I'd like to discuss my observations and some solutions for doing the following things within an iRule:
- Analyse a received HTTP2 request
- Switch pools based on the received HTTP2 request
- Switch server-side SSL profiles based on the HTTP2 request
- Inject an SNI value into the server-side CLIENTHELLO based on a free-text value (e.g. [HTTP::host]).
For HTTP/1.x workloads this was basically bread-and-butter stuff and didn't require me to understand any rocket science. But with HTTP2, these rather simple tasks have somehow changed...
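For reference, in the HTTP/1.x world I would do something roughly like the sketch below (the host name, pool and profile names are just placeholders, not taken from any real config):

when HTTP_REQUEST {
    # Pick a pool and remember the wanted server-side SSL profile based on the Host header
    if { [string tolower [HTTP::host]] eq "app.example.com" } then {
        pool pool_app
        set ssl_profile "/Common/serverssl_app"
    }
}
when SERVER_CONNECTED {
    # Apply the server-side SSL profile selected during HTTP_REQUEST
    if { [info exists ssl_profile] } then {
        catch { SSL::profile $ssl_profile }
    }
}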
Some background information on HTTP2 iRules
In HTTP2 a single TCP connection will be used to carry multiple HTTP2 streams in parallel.
- Client-Side TCP Connection
- HTTP2 stream ID 1 (Request for /)
- HTTP2 stream ID 3 (Request for /favicon.ico)
- HTTP2 stream ID 5 (Request for /background.jpg)
- ...
To handle such parallel HTTP2 streams on a single TCP connection, LTM got some interesting changes for HTTP2 workloads, which affect the way $variables are separated and used within iRules.
The iRule snippet below illustrates the change...
when CLIENT_ACCEPTED {
set var_conn 1
}
when HTTP_REQUEST {
incr var_conn
log local0.debug $var_conn
}
In the traditional HTTP/1.x world, the number stored in the $var_conn variable would have been incremented with every keep-alive'd HTTP request sent through the same TCP connection.
But this is not the case when dealing with HTTP2 workload while the HTTP MRF Router is enabled. The iRule snippet above will always log "2", independent of the total number of HTTP2 streams already sent over the very same TCP connection.
When the HTTP MRF Router is enabled, an individual HTTP2 stream will always create its own "child" $variable environment, using the TCP connection environment as a carbon copy to inherit its $variables. The $variables within a given HTTP2 stream stay isolated for the entire duration of the HTTP2 stream (including its response) and won't mix up with the $variables of other HTTP2 streams or the $variables of the underlying TCP connection.
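To see the stream isolation in action, a quick sketch like the one below should always log a single-element list per request, since each stream appends only to its own carbon copy of the list (at least that's what the behavior described above implies):

when CLIENT_ACCEPTED {
    set seen_streams [list]
}
when HTTP_REQUEST {
    # Each HTTP2 stream works on its own copy, so the list never accumulates across streams
    lappend seen_streams [HTTP2::stream]
    log local0.debug "Visible streams: $seen_streams"
}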
So far so good...
Info: The text above is only valid for HTTP MRF Router enabled HTTP2 workload. HTTP2 Gateway mode without the HTTP MRF Router enabled behaves slightly differently: when the HTTP MRF Router is disabled, an individual HTTP2 stream gets assigned to a child environment based on the set of "concurrently active HTTP2 streams". The observations and problems outlined below are not valid for scenarios where the HTTP MRF Router is disabled. Without using the HTTP MRF Router you won't run into the issues discussed below...
The "awkwardness" of the HTTP MRF Router
A challenging scenario (which has caused me sleepless nights) is shown below...
when CLIENT_ACCEPTED {
set conn_var 1
}
when HTTP_REQUEST {
set ssl_profile "MyCustomSSLProfile"
}
when SERVER_CONNECTED {
if { [info exists ssl_profile] } then {
catch { SSL::profile $ssl_profile }
} else {
log local0.debug "No SSL Profile | Vars: [info vars]"
}
}
The log output will always be "No SSL Profile | Vars: conn_var". The SERVER_CONNECTED event somehow does not have access to the $ssl_profile variable, which was just set by the HTTP_REQUEST event.
This implicitly means that the SERVER_CONNECTED event is not executed in the carbon-copied $variable environment of the HTTP2 stream which triggered the LB decision to open a new connection.
Let's try to figure out in which $variable environment the SERVER_CONNECTED event is executed...
when CLIENT_ACCEPTED {
set x "Hello"
}
when HTTP_REQUEST {
set logstring ""
if { [info exists x] } then { lappend logstring $x }
if { [info exists y] } then { lappend logstring $y }
log local0.debug $logstring
}
when SERVER_CONNECTED {
set y "World"
}
The log output of the iRule above will be "Hello" on the first HTTP2 stream request, and "Hello World" on consecutive HTTP2 streams received after the SERVER_CONNECTED event has been executed.
I immediately thought: "Okay, we must be back on the TCP connection layer environment during the SERVER_CONNECTED event!?!"
The first HTTP2 stream gets a carbon copy of the TCP connection environment after CLIENT_ACCEPTED was executed, with only $x set; the SERVER_CONNECTED event then adds $y to the TCP connection environment, and subsequent HTTP2 streams get a carbon copy of the TCP connection environment with both $x and $y set.
Sounds somewhat reasonable... but then I tried this...
when HTTP_REQUEST {
if { [info exists y] } then {
log local0.debug $y
} else {
log local0.debug "???"
}
}
when SERVER_CONNECTED {
set y "Hello World"
}
The log output will be "???" on the initial HTTP2 stream (as expected). But after SERVER_CONNECTED has been executed, the log still continues to output "???" on every subsequent HTTP2 stream (duh! wait? what?).
This behavior would basically mean that the SERVER_CONNECTED event is (in contrast to what I initially thought) not executed in the original $variable environment of the underlying TCP connection.
At this point, I can only assume what is happening behind the scenes: the SERVER_CONNECTED event is also running in a carbon-copy environment, into which the original $variable environment of our TCP connection gets copied/ref-linked (but ONLY if any $variables were set), and any changes made to the carbon-copy environment get copied/ref-linked back to the original $variable environment of our TCP connection (but only if any $variables were initially set).
With enough imagination this sounds at least explainable... but seriously... is it really working like that?
Note: At least SERVER_CONNECTED and all SERVERSSL_* related events are affected by this behavior. I did not test whether other events are affected too.
Question 1: If someone has insight into what is happening here, or has other creative ideas to explain the outcome of my observations, I would be more than happy to get some feedback. It's driving me nuts to get fooled by the iRules I've shown...
Let's discuss some solutions...
Disclaimer: Don't use any of the iRule code below in your production environment unless you understand the impact. I crashed my dev environment more than once while exploring possibilities. Right now it's way too experimental to recommend anything for use in a production environment. And please don't blame me if you missed reading this warning... 😉
I've already reached the point where I simply accept the fact that you can't pass a regular $variable between a given HTTP2 stream and the SERVER_CONNECTED or SERVERSSL_* events. But I still need to get my job done, so I started to explore possibilities to interact between an HTTP2 stream and the server-side events.
My favorite solution so far exports an $array() variable from the HTTP2 stream during LB_SELECTED to a [table -subtable] stored on the local TMM core. The SERVER_CONNECTED event then looks up the [table -subtable] and restores the $array() variable.
when CLIENT_ACCEPTED {
#################################################
# Define a connection specific label
set conn(label) "[IP::client_addr]:[TCP::client_port]|[IP::local_addr]:[TCP::local_port]"
}
when HTTP_REQUEST {
#################################################
# Clean vars from previous requests
unset -nocomplain temp conf
#################################################
# Define values for export
set conf(SSL_Profile) "/Common/serverssl"
set conf(SNI_Value) [HTTP::host]
}
when LB_SELECTED {
#################################################
# Export conf() to local TMM subtable
if { [info exists conf] } then {
if { [catch {
#################################################
# Try to export conf() to local TMM subtable
table set -subtable $static::local_tmm_subtable \
"$conn(label)|[HTTP2::stream]" \
[array get conf] \
indef \
30
}] } then {
#################################################
# Discover subtable on local TMM core (once after reboot)
set tmm(table_iterations) [expr { [TMM::cmp_count] * 7 }]
for { set tmm(x) 0 } { $tmm(x) < $tmm(table_iterations) } { incr tmm(x) } {
set tmm(start_timestamp) [clock clicks]
table lookup -subtable "tmm_local_$tmm(x)" [clock clicks]
set tmm(stop_timestamp) [clock clicks]
set tmm_times([expr { $tmm(stop_timestamp) - $tmm(start_timestamp) }]) $tmm(x)
}
set static::local_tmm_subtable "tmm_local_$tmm_times([lindex [lsort -increasing -integer [array names tmm_times]] 0])"
#################################################
# Restart export of conf() to local TMM subtable
table set -subtable $static::local_tmm_subtable \
"$conn(label)|[HTTP2::stream]" \
[array get conf] \
indef \
30
}
}
}
when SERVER_CONNECTED {
#################################################
# Import conf() from local TMM subtable
clientside {
catch {
array set conf [table lookup \
-subtable $static::local_tmm_subtable \
"$conn(label)|[HTTP2::stream]"]
}
}
#################################################
# Select and apply the server-side SSL profile based on conf()
if { [info exists conf(SSL_Profile)] } then {
catch { SSL::profile $conf(SSL_Profile) }
} else {
SSL::profile disable
}
}
when SERVERSSL_CLIENTHELLO_SEND {
#################################################
# Inject SNI Value based on conf() variable
if { [info exists conf(SNI_Value)] } then {
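# The [binary format SSScSa*] below builds a TLS server_name extension:
#   S  -> extension type 0 (server_name)
#   S  -> extension length (SNI length + 5)
#   S  -> server name list length (SNI length + 3)
#   c  -> name type 0 (host_name)
#   S  -> host name length
#   a* -> host name value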
SSL::extensions insert [binary format SSScSa* \
0 \
[expr { [set temp(sni_length) [string length $conf(SNI_Value)]] + 5 }] \
[expr { $temp(sni_length) + 3 }] \
0 \
$temp(sni_length) \
$conf(SNI_Value)]
}
}
Besides the slightly awkward approach of storing things in a [table -subtable] to interact between the iRule events, a warning message is raised every time you save the iRule.
Dec 15 22:17:44 kw-f5-dev.itacs.de warning mcpd[5551]: 01071859:4: Warning generated : /Common/HTTP2_FrontEnd:167: warning: [The following errors were not caught before. Please correct the script in order to avoid future disruption. "{badEventContext {command is not valid in current event context (SERVER_CONNECTED)} {5213 13}}"5105 132][clientside {
During iRule execution it seems to be absolutely fine to call the [clientside] command during the SERVER_CONNECTED event to access the [HTTP2::stream] ID which triggered the LB_SELECTED event.
Question 2: Do you know other approaches to deal with the outlined HTTP MRF Router awkwardness? Do you have any doubts that the approach above will run stably? Do you have any tips on how to improve the code? Should I be concerned about MCPD complaining about the syntax, or should I simply wrap [clientside] into an [eval $cmd] to trick MCPD? A rough sketch of that idea is shown below.
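For what it's worth, the [eval] wrapping I have in mind would look roughly like this (an untested sketch; it only hides the command from the save-time validation, the runtime behavior should stay the same):

when SERVER_CONNECTED {
    # Build the clientside call as a script string so MCPD's validator never parses it directly
    set cmd {
        clientside {
            catch {
                array set conf [table lookup \
                    -subtable $static::local_tmm_subtable \
                    "$conn(label)|[HTTP2::stream]"]
            }
        }
    }
    eval $cmd
}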
I know the post got basically "tl;dr" long, but this problem bothers me quite a bit. A customer is already waiting for a stable solution... 😞
Cheers, Kai
PeteWhite (Employee)
You are correct - the important point to note about MRF (Message Routing Framework) is that client and server flows are in different contexts. This is actually the great strength of the MRF framework - a message could arrive on TCP and depart on UDP to multiple destinations. Load balancing is done per message, not per layer-4 connection. See https://clouddocs.f5.com/api/irules/MR.html
"MR iRule commands operate within a Tcl context associated with the connection flow between the endpoint and the MR proxy. The ingress and egress parts of a message’s journey therefore operate in separate Tcl contexts. The Tcl context contains the Tcl variables and execution state of the currently executing iRule event. Only one iRule event can execute at a time on a connection flow, therefore messages queue to execute their iRule events.
In many MR protocols, messages belong to independent transactions that are carried over the same network connection flow. It is highly desirable for messages sharing a connection flow to execute their iRules independently of other messages. This provides the following advantages and behavior changes:
- A message does not need to wait for an unrelated message to complete an event in order to execute its own event.
- Messages sharing a connection flow may exit the flow in a different order than they entered.
- Tcl variables cannot be overwritten between events by another message."
To answer your question, you can either use the table as you have done, or use the MR::store and MR::restore commands to access clientside and serverside info from the opposite context. See https://clouddocs.f5.com/api/irules/MR__store.html
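The basic pattern would be roughly like the sketch below, where the variable travels with the message from the ingress side to the egress side (just a rough sketch based on my reading of the docs, and my_var is only a placeholder - I haven't tested it against your HTTP/2 scenario):

when MR_INGRESS {
    # Attach the named Tcl variable to the message before it leaves the client-side context
    MR::store my_var
}
when MR_EGRESS {
    # Recreate the stored variable in the server-side (egress) context
    MR::restore my_var
}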
Hey PeteWhite,
Finally found some time to experiment with the MR_INGRESS and MR_EGRESS events and the MR::store and MR::restore commands.
Unfortunately, those two commands are not able to copy/link a given variable from, let's say, the HTTP_REQUEST event to the SERVER_CONNECTED event, nor from SERVER_CONNECTED to, let's say, the HTTP_REQUEST_RELEASE event.
SERVER_CONNECTED and SERVERSSL_* behave slightly weirdly. Could you please double-check the information you've provided?
This simple test illustrates the issue...
when CLIENT_ACCEPTED {
    set CON_ID [TMM::cmp_unit][clock clicks]
    set REQ_ID ""
    log local0.debug "$CON_ID|$REQ_ID"
}
when HTTP_REQUEST {
    set REQ_ID [TMM::cmp_unit][clock clicks]
    log local0.debug "$CON_ID|$REQ_ID"
}
when REMAINING_EVENTS {
    log local0.debug "$CON_ID|$REQ_ID"
}
Log output:
tmm1[17801]: Rule <CLIENT_ACCEPTED>: 11674705152203080|
tmm1[17801]: Rule <HTTP_REQUEST>: 11674705152203080|11674705152236713
tmm1[17801]: Rule <MR_INGRESS>: 11674705152203080|11674705152236713
tmm1[17801]: Rule <LB_SELECTED>: 11674705152203080|11674705152236713
tmm1[17801]: Rule <SERVER_CONNECTED>: 11674705152203080|
tmm1[17801]: Rule <SERVERSSL_CLIENTHELLO_SEND>: 11674705152203080|
tmm1[17801]: Rule <SERVERSSL_SERVERHELLO>: 11674705152203080|
tmm1[17801]: Rule <SERVERSSL_SERVERCERT>: 11674705152203080|
tmm1[17801]: Rule <SERVERSSL_HANDSHAKE>: 11674705152203080|
tmm1[17801]: Rule <MR_EGRESS>: 11674705152203080|11674705152236713
tmm1[17801]: Rule <HTTP_REQUEST_RELEASE>: 11674705152203080|11674705152236713
tmm1[17801]: Rule <HTTP_RESPONSE>: 11674705152203080|11674705152236713
tmm1[17801]: Rule <MR_INGRESS>: 11674705152203080|11674705152236713
tmm1[17801]: Rule <MR_EGRESS>: 11674705152203080|11674705152236713
tmm1[17801]: Rule <HTTP_RESPONSE_RELEASE>: 11674705152203080|11674705152236713
Adding MR::store and/or MR::restore won't make any difference. The SERVER_CONNECTED and SERVERSSL_* events won't see the $var added by HTTP_REQUEST...
Cheers, Kai
Hi Pete,
Somehow found the cause of the variable glitch and a final workaround for my problem.
If you set any local variables during the CLIENT_ACCEPTED event, then the strange variable behavior outlined above happens.
If you don't set any variables during the CLIENT_ACCEPTED event, then the HTTP_REQUEST event is able to pass variables to the SERVER_CONNECTED and SERVERSSL_* events, and you can also pass variables out of those events to the HTTP_REQUEST_RELEASE event. A minimal sketch of the working pattern is shown below.
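Minimal sketch of the workaround (no variables at all in CLIENT_ACCEPTED; the profile name is just an example):

# Intentionally no set commands in CLIENT_ACCEPTED
when HTTP_REQUEST {
    set ssl_profile "/Common/serverssl"
}
when SERVER_CONNECTED {
    # The variable set during HTTP_REQUEST is now visible here
    if { [info exists ssl_profile] } then {
        catch { SSL::profile $ssl_profile }
    }
    set backend [IP::server_addr]
}
when HTTP_REQUEST_RELEASE {
    # ...and a variable set on the server side can be read back here
    if { [info exists backend] } then {
        log local0.debug "Request released towards $backend"
    }
}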
Looks like a very annoying bug. I would love to open a new case, but my MVP support contract has just expired... 🤐
Cheers, Kai