application delivery
Checkpoint Web Smartconsole behind reverse proxy
Does anyone have any experience trying (and hopefully succeeding) to put a Checkpoint (CP) Provider-1 based web SmartConsole behind a reverse proxy? The issue is that CP uses local IP addresses to identify one of a selection of management module instances, and it uses WebTransport/WebSockets to connect from these management modules back to the browser for displaying firewall policies, log data, etc. That all seems fairly OK, but they don't anchor it using the connection ID, so the raw IPs (of what they call the domain blade/instance) get passed to the browser. We would prefer to NAT/hide/re-IP the server (domain) side addresses and not have the internal server/domain IPs sent along to the browser.

Part of the conversation from the server to the client, with some wrapper text from me, follows:

***
We wish to access various customer domains using the /smartconsole web interface, but the access has to be behind a reverse proxy (F5 vIP). After the initial logon using the CMA IP behind a vIP (so the address the browser sees is a public service one) you get a screen where the domain is listed, and after selecting continue you get redirected separately to the CMA IP in an internal JSON/JavaScript message, hence breaking the attempt to have the CMA behind a reverse proxy.
***
{"data":{"loginToDomain":{"transportOtt":"107ad894-253d-4638-aa31-1c3e7d23172a","transportUrl":"https://100.64.20.29:443/smartconsole/transport","__typename":"LoginToDomainResponse"}}}
***
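
If the internal transportUrl address can be mapped statically to a published name, one option would be to rewrite the JSON response on the F5 before it reaches the browser. The sketch below assumes a virtual server with an HTTP profile plus a STREAM profile assigned, and assumes 100.64.20.29 maps to the externally published name smartconsole.example.com (both placeholders); it does not address the WebSocket transport itself:

    when HTTP_REQUEST {
        # Ask the server for an uncompressed response so the body can be rewritten
        HTTP::header remove "Accept-Encoding"
        # The stream filter should only act on responses
        STREAM::disable
    }
    when HTTP_RESPONSE {
        # Rewrite the internal CMA address to the published name in JSON payloads
        if { [HTTP::header value "Content-Type"] contains "json" } {
            STREAM::expression {@100.64.20.29@smartconsole.example.com@}
            STREAM::enable
        }
    }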

4600 rSeries tenant resizing and HA Dependency
Consider an HA pair between two tenants:

rSeries Chassis 1 - Tenant1 - F5 VM (Active)
rSeries Chassis 2 - Tenant1 - F5 VM (Standby)

If Tenant1 on Chassis 2 is resized from a resource perspective (CPU) and put back into the Deployed state, will the F5 VM automatically become part of the HA pair again, or do both tenants need to have the same hardware resources allocated?
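
For what it's worth, once the resized tenant is back in the Deployed state, the HA and sync state can be checked from inside the BIG-IP tenant; the device group name below is only a placeholder:

    # Check the failover state of this tenant
    tmsh show sys failover

    # Check whether the device group is in sync after the resize
    tmsh show cm sync-status

    # If needed, push the configuration from the active unit
    # (device group name is an example)
    tmsh run cm config-sync to-group my_device_group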

Need F5 iRules Consultant - HTTP Header Manipulation Issues
Looking for help with F5 BIG-IP iRules development for custom HTTP header processing. We're trying to modify incoming request headers based on client IP geolocation, but the iRule is causing connection timeouts on certain requests. The logic works for most traffic, but specific user agents seem to trigger infinite loops in the header rewrite code. We need someone experienced with iRules scripting and HTTP event handling to debug the conditional logic. Seeking 2-3 hours of remote troubleshooting to identify and fix the timeout issues. Must be resolved by Thursday for production deployment.
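
For reference, and not the poster's actual iRule, a typical shape for geolocation-based header rewriting is sketched below; the X-Client-Country header name is an assumed example, and the whereis lookup requires the geolocation database to be provisioned:

    when HTTP_REQUEST {
        # Look up the client country from the geolocation database
        set country [whereis [IP::client_addr] country]

        # Remove any client-supplied copy first, then insert exactly once,
        # so repeated rewrites cannot stack headers
        HTTP::header remove "X-Client-Country"
        if { $country ne "" } {
            HTTP::header insert "X-Client-Country" $country
        }
    }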

DOSl7 reset learning database for automatic mode
Dear DoS protectors, how are we able to clear the automatic DoS L7 learning statistics in case we want to relearn the traffic? Is there any clear/reset button for that, or do we need to set the profile to Off and On again to force it to relearn from scratch?

Portal Access Application URI - ERR_EMPTY_RESPONSE
Scenario: Remote users need to access an externally hosted website that is whitelisted to my company's internal IP.
Setup: Public-facing webtop with a resource assignment for a Portal Access Application URI.
Issue: Remote users can get to the external website through the webtop, which opens in a new browser tab, but when they click the Login button, SSO redirection occurs and the page renders an ERR_EMPTY_RESPONSE message.
Troubleshooting: Using dev tools I was able to determine the backend server was returning an x-frame-options: DENY header, which translates to "Do not allow this page to be loaded inside a frame". Not sure where to go from here.
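
One workaround that is sometimes suggested, offered here only as a sketch and not a confirmed fix for this specific portal, is to strip the framing restriction from the backend response on the virtual server that fronts the Portal Access resource, accepting that this weakens the application's clickjacking protection:

    when HTTP_RESPONSE {
        # Remove the framing restriction set by the backend server so the
        # page may render inside the webtop frame (reduces clickjacking protection)
        HTTP::header remove "X-Frame-Options"
        # Note: newer applications may enforce the same restriction via a
        # Content-Security-Policy frame-ancestors directive, which would
        # need to be handled as well
    }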

F5 iRule Reverse Proxy, rewrite, redirect
Hello everyone, we currently have a scenario where a URL is no longer available and its traffic needs to be sent elsewhere. The starting point: when https://company.com/tool is accessed, it should end up at https://x.x.x.x/tool. Unfortunately, the target website doesn't have an FQDN, so it has to be reached by IP address. Of course, https://company.com/tool should remain in the browser. Is this possible? A reverse proxy approach? Could someone provide me an example iRule? THX
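
One way this could be done while keeping https://company.com/tool in the browser is to proxy rather than redirect: on the virtual server for company.com, send requests for /tool to the backend by IP. The address 10.1.2.3 and port 443 are placeholders, and a server-side SSL profile would be needed if the backend expects HTTPS:

    when HTTP_REQUEST {
        # Requests for /tool (and anything below it) are proxied to the
        # backend IP; the client keeps seeing https://company.com/tool
        if { [HTTP::path] starts_with "/tool" } {
            # Placeholder backend address and port
            node 10.1.2.3 443
        }
    }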

Dealing with iRule $variables for HTTP2 workload while HTTP MRF Router is enabled
Hi Folks, I'd like to start a discussion on how to deal with iRule $variables that traverse independent iRule events, in combination with HTTP2 workload and the HTTP MRF Router enabled (e.g. HTTP2 Full-Proxy mode or optionally HTTP2 Gateway mode). In detail, I'd like to discuss the observations and some solutions while doing the following things within an iRule:

- Analyse a received HTTP2 request
- Switch pools based on the received HTTP2 request
- Switch server-side SSL profiles based on the HTTP2 request
- Inject an SNI value into the server-side CLIENTHELLO based on a free-text value (e.g. [HTTP::host])

For HTTP/1.x workload this was basically bread-and-butter stuff and didn't require me to understand any rocket science. But in HTTP2 those rather simple tasks have somehow changed...

Some background information on HTTP2 iRules

In HTTP2 a single TCP connection will be used to carry multiple HTTP2 streams in parallel:

    - Client-Side TCP Connection
      - HTTP2 stream ID 1 (Request for /)
      - HTTP2 stream ID 3 (Request for /favicon.ico)
      - HTTP2 stream ID 5 (Request for /background.jpg)
      - ...

To handle such parallel HTTP2 streams on a single TCP connection, LTM got some interesting changes for HTTP2 workloads, which affect the way $variables are separated and used within iRules. The iRule snippet below illustrates the change...

    when CLIENT_ACCEPTED {
        set var_conn 1
    }
    when HTTP_REQUEST {
        incr var_conn
        log local0.debug $var_conn
    }

In the traditional HTTP/1.x world, the number stored in the $var_conn variable would have been increased with every keep-alive'd HTTP request sent through the same TCP connection. But this is not the case when dealing with HTTP2 workload while the HTTP MRF Router is enabled. The iRule snippet above will always log "2", independently of the total number of HTTP2 streams already sent over the very same TCP connection.

When the HTTP MRF Router is enabled, an individual HTTP2 stream will always create its own "child" $variable environment, using the TCP connection environment as a carbon copy to inherit its $variables. The $variables within a given HTTP2 stream stay isolated for the entire duration of the HTTP2 stream (including its response) and won't mix up with the $variables of other HTTP2 streams or the $variables of the underlying TCP connection. So far so good...

Info: The text above is only valid for HTTP MRF Router enabled HTTP2 workload. HTTP2 Gateway mode without the HTTP MRF Router enabled behaves slightly differently: when the HTTP MRF Router is disabled, an individual HTTP2 stream will be assigned to a child environment based on a bunch of "concurrently active HTTP2 streams". The observations and problems outlined below are not valid for scenarios where the HTTP MRF Router is disabled. Without using the HTTP MRF Router you won't run into the issues discussed below...

The "awkwardness" of the HTTP MRF Router

A challenging scenario (which has caused me sleepless nights) is shown below...

    when CLIENT_ACCEPTED {
        set conn_var 1
    }
    when HTTP_REQUEST {
        set ssl_profile "MyCustomSSLProfile"
    }
    when SERVER_CONNECTED {
        if { [info exists ssl_profile] } then {
            catch { SSL::profile $ssl_profile }
        } else {
            log local0.debug "No SSL Profile | Vars: [info vars]"
        }
    }

The log output will always be "No SSL Profile | Vars: conn_var". The SERVER_CONNECTED event somehow does not have access to the $ssl_profile variable which was just set by the HTTP_REQUEST event.

This implicitly means that the SERVER_CONNECTED event is not getting executed in the carbon-copied $variable environment of the HTTP2 stream which triggered the LB decision to open a new connection. Let's try to figure out in which $variable environment the SERVER_CONNECTED event is executed...

    when CLIENT_ACCEPTED {
        set x "Hello"
    }
    when HTTP_REQUEST {
        set logstring ""
        if { [info exists x] } then { lappend logstring $x }
        if { [info exists y] } then { lappend logstring $y }
        log local0.debug $logstring
    }
    when SERVER_CONNECTED {
        set y "World"
    }

The log output of the iRule above will be "Hello" on the first HTTP2 stream request and "Hello World" on consecutive HTTP2 streams which are received after the SERVER_CONNECTED event has been executed. I immediately thought: "Okay, we must be back in the TCP connection layer environment during the SERVER_CONNECTED event!?!" The first HTTP2 stream gets a carbon copy of the TCP connection environment (after CLIENT_ACCEPTED was executed) with only $x set; then the SERVER_CONNECTED event adds $y to the TCP connection environment, and subsequent HTTP2 streams get a carbon copy of the TCP connection environment with $x and $y set. Sounds somehow reasonable... but then I tried this...

    when HTTP_REQUEST {
        if { [info exists y] } then {
            log local0.debug $y
        } else {
            log local0.debug "???"
        }
    }
    when SERVER_CONNECTED {
        set y "Hello World"
    }

The log output will be "???" on the initial HTTP2 stream (as expected). But after SERVER_CONNECTED has been executed, the log will still continue to output "???" on every subsequent HTTP2 stream (duh! wait? what?). This behavior would basically mean that the SERVER_CONNECTED event is (in contrast to what I initially thought) not executed in the original $variable environment of the underlying TCP connection.

At this point, I can only assume what is happening behind the scenes: the SERVER_CONNECTED event is also running in a carbon-copy environment, into which the original $variable environment of our TCP connection gets copied/ref-linked (but ONLY if any $variables were set), and any changes made to the carbon-copy environment would become copied/ref-linked back to the original $variable environment of our TCP connection (but only if any $variables were initially set). With enough imagination this sounds at least explainable... but seriously... is it really working like that?

Note: At least SERVER_CONNECTED and all SERVERSSL_* related events are affected by this behavior. I did not test whether other events are affected too.

Question 1: If someone has insights into what is happening here, or has other creative ideas to explain the outcome of my observations, I would be more than happy to get some feedback. It's driving me nuts to get fooled by the iRules I've shown...

Let's discuss some solutions...

Disclaimer: Don't use any of the iRule code below in your production environment unless you understand the impact. I crashed my dev environment more than once while exploring possibilities. Right now it's way too experimental to recommend anything for use in a production environment. And please don't blame me if you missed reading this warning... 😉

I already reached the point where I've simply accepted the fact that you can't pass a regular $variable between a given HTTP2 stream and the SERVER_CONNECTED or SERVERSSL_* events. But I still need to get my job done and started to explore possibilities to interact between an HTTP2 stream and the server-side events.

My favorite solution so far exports an $array() variable from the HTTP2 stream during LB_SELECTED to a [table -subtable] stored on the local TMM core. The SERVER_CONNECTED event will then look up the [table -subtable] and restore the $array() variable.

    when CLIENT_ACCEPTED {
        #################################################
        # Define a connection specific label
        set conn(label) "[IP::client_addr]|[TCP::client_port]|[IP::local_addr]:[TCP::local_port]"
    }
    when HTTP_REQUEST {
        #################################################
        # Clean vars from previous requests
        unset -nocomplain temp conf
        #################################################
        # Define values for export
        set conf(SSL_Profile) "/Common/serverssl"
        set conf(SNI_Value) [HTTP::host]
    }
    when LB_SELECTED {
        #################################################
        # Export conf() to local TMM subtable
        if { [info exists conf] } then {
            if { [catch {
                #################################################
                # Try to export conf() to local TMM subtable
                table set -subtable $static::local_tmm_subtable \
                          "$conn(label)|[HTTP2::stream]" \
                          [array get conf] \
                          indef \
                          30
            }] } then {
                #################################################
                # Discover subtable on local TMM core (once after reboot)
                set tmm(table_iterations) [expr { [TMM::cmp_count] * 7 }]
                for { set tmm(x) 0 } { $tmm(x) < $tmm(table_iterations) } { incr tmm(x) } {
                    set tmm(start_timestamp) [clock clicks]
                    table lookup -subtable "tmm_local_$tmm(x)" [clock clicks]
                    set tmm(stop_timestamp) [clock clicks]
                    set tmm_times([expr { $tmm(stop_timestamp) - $tmm(start_timestamp) }]) $tmm(x)
                }
                set static::local_tmm_subtable "tmm_local_$tmm_times([lindex [lsort -increasing -integer [array names tmm_times]] 0])"
                #################################################
                # Restart export of conf() to local TMM subtable
                table set -subtable $static::local_tmm_subtable \
                          "$conn(label)|[HTTP2::stream]" \
                          [array get conf] \
                          indef \
                          30
            }
        }
    }
    when SERVER_CONNECTED {
        #################################################
        # Import conf() from local TMM subtable
        clientside {
            catch {
                array set conf [table lookup \
                                    -subtable $static::local_tmm_subtable \
                                    "$conn(label)|[HTTP2::stream]"]
            }
        }
        #################################################
        # Select the server-side SSL profile based on conf()
        if { [info exists conf(SSL_Profile)] } then {
            catch { SSL::profile $conf(SSL_Profile) }
        } else {
            SSL::profile disable
        }
    }
    when SERVERSSL_CLIENTHELLO_SEND {
        #################################################
        # Inject SNI value based on conf() variable
        if { [info exists conf(SNI_Value)] } then {
            SSL::extensions insert [binary format SSScSa* \
                                        0 \
                                        [expr { [set temp(sni_length) [string length $conf(SNI_Value)]] + 5 }] \
                                        [expr { $temp(sni_length) + 3 }] \
                                        0 \
                                        $temp(sni_length) \
                                        $conf(SNI_Value)]
        }
    }

Besides the slightly awkward approach of storing things in a [table -subtable] to interact between the iRule events, an error message will be raised every time you save the iRule:

    Dec 15 22:17:44 kw-f5-dev.itacs.de warning mcpd[5551]: 01071859:4: Warning generated : /Common/HTTP2_FrontEnd:167: warning: [The following errors were not caught before. Please correct the script in order to avoid future disruption. "{badEventContext {command is not valid in current event context (SERVER_CONNECTED)} {5213 13}}"5105 132][clientside {

During iRule execution it seems to be absolutely fine to call the [clientside] command during the SERVER_CONNECTED event to access the [HTTP2::stream] ID which triggered the LB_SELECTED event.
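
One idea to silence that save-time warning would be to hide the [clientside] call from MCPD's validation by building the command as a string and running it through [eval]; a rough, untested sketch (runtime behavior should be unchanged):

    when SERVER_CONNECTED {
        # The braces keep MCPD's save-time parser from validating the
        # clientside command in this event context; eval runs it at runtime
        set cmd {clientside { catch { array set conf [table lookup \
            -subtable $static::local_tmm_subtable \
            "$conn(label)|[HTTP2::stream]"] } }}
        eval $cmd
    }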

Question 2: Do you know other approaches to deal with the outlined MRF HTTP Router awkwardness? Do you have any doubts that the approach above will run stably? Do you have any tips on how to improve the code? Should I be concerned that MCPD complains about the syntax, or should I simply wrap [clientside] into an [eval $cmd] (as sketched above) to trick out MCPD?

I know the post got basically "tl;dr" long, but this problem bothers me pretty much. A customer is already waiting for a stable solution... 😞

Cheers, Kai