http::release
iRule catching HTTP_REQUEST made to other Virtual Server
I'm experiencing a problem with apparently conflicting LTM iRules. I have two Virtual Servers set up (let's name one VS_TEST and the other VS_PREP). Each has a different iRule applied to it (iRule_TEST and iRule_PREP). These iRules perform the same function: they intercept incoming HTTP requests, extract some data, and then forward that data to an application running on the corresponding pool (POOL_TEST, POOL_PREP) in the form of an HTTP GET. The application returns either Allow or Deny, telling the iRule whether to let the request pass through or to reject it. Each pool has only one node.

Normally these iRules behave correctly. A request made to VS_TEST is handled by iRule_TEST and sends information to the application running on the single node in POOL_TEST.

There is a second type of request made to the Virtual Servers; let's call these password requests, as they retrieve a password that is randomly generated by the server. I need to intercept the response from the server, extract the password, and send it to the same application as before, so I added HTTP_RESPONSE and HTTP_RESPONSE_DATA events to the iRules.

However, when I add the HTTP_RESPONSE and HTTP_RESPONSE_DATA events to both iRules, there is a conflict which depends on the order in which the iRules are updated. For example, if I update iRule_TEST first, followed by iRule_PREP:

- Requests made to VS_TEST are handled by iRule_TEST, but iRule_TEST sends the data of the request to the single node in POOL_PREP!
- Requests made to VS_PREP are handled by iRule_PREP, and the data of the request is sent to the single node in POOL_PREP, as expected.

How is this possible when both POOL_TEST and the IP:port of its corresponding node are explicitly mentioned in iRule_TEST? The exact opposite happens if I update iRule_PREP first.

iRule_TEST:

    when RULE_INIT {
        # IP:port of destination node (specific to TEST)
        set static::serveripport "192.168.10.80:80"
    }

    when HTTP_REQUEST {
        if { [HTTP::query] starts_with "message=" } {
            # This is a request we want to intercept
            log local0. "Raw request: [HTTP::query]"

            # Extract the actual message
            regexp {(message\=)(.*)} [HTTP::query] -> garbage query

            # Connect to node. Use catch to handle errors. Check if return value is not null.
            if { [catch {connect -timeout 1000 -idle 30 -status conn_status $static::serveripport} conn_id] == 0 && $conn_id ne "" } {
                # Send TCP payload to application
                set data "GET /Service.svc/checkmessage?message=$query"
                set send_info [send -timeout 1000 -status send_status $conn_id $data]

                # Receive reply from application
                set recv_info [recv -timeout 1000 -status recv_status $conn_id]

                # Allow or deny request based on application response
                if { $recv_info contains "Allow" } {
                    pool POOL_TEST
                } elseif { $recv_info contains "Deny" } {
                    reject
                }

                # Tidy up
                close $conn_id
            } else {
                reject
            }
        }
    }

    # Update below: events added to handle the password responses
    when HTTP_RESPONSE {
        # Collect all 200 responses
        if { [HTTP::status] == 200 } {
            set content_length [HTTP::header "Content-Length"]
            HTTP::collect $content_length
        }
    }

    when HTTP_RESPONSE_DATA {
        if { [catch {binary scan [HTTP::payload] H* payload_hex} error] != 0 } {
            log local0. "Error whilst binary scanning response: $error"
        } else {
            # placeholder from the original post: match on some hex string
            if { some hex string matches } {
                # collect password from response and set it to $password (elided in the post)

                # Connect to node. Use catch to handle errors. Check if return value is not null.
                if { [catch {connect -timeout 1000 -idle 30 -status conn_status $static::serveripport} conn_id] == 0 && $conn_id ne "" } {
                    # Send TCP payload to application
                    set data "GET /Service.svc/submitresponse?password=$password"
                    set send_info [send -timeout 1000 -status send_status $conn_id $data]

                    # Tidy up
                    close $conn_id
                }
            }
            HTTP::release
        }
    }

iRule_PREP is identical, save for the references to POOL_TEST and the static::serveripport address, which point to the PREP pool and node instead.
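The accepted answer is not included above, but one explanation consistent with the order-of-update symptom is that variables in the static:: namespace are global to the whole BIG-IP rather than scoped to a single iRule, and RULE_INIT runs whenever an iRule is saved. Both iRules set the same static::serveripport, so whichever RULE_INIT ran last supplies the address that both iRules then use. A minimal sketch of one way to avoid that collision, using a distinct static variable name per iRule (the PREP address below is a placeholder, not taken from the thread):

    # iRule_TEST
    when RULE_INIT {
        # TEST-specific name, so iRule_PREP's RULE_INIT cannot overwrite it
        set static::serveripport_test "192.168.10.80:80"
    }
    when HTTP_REQUEST {
        # ... same logic as above, but using $static::serveripport_test and pool POOL_TEST ...
    }

    # iRule_PREP
    when RULE_INIT {
        # PREP-specific name; this address is a placeholder
        set static::serveripport_prep "192.0.2.80:80"
    }
    when HTTP_REQUEST {
        # ... same logic, but using $static::serveripport_prep and pool POOL_PREP ...
    }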
Downstream LTM error processing POST proxied from upstream LTM
Strange error encountered. We have a traffic flow that goes:

Browser -> LTM1 -> LTM2 -> pool of WebLogic servers

We sporadically encounter timeouts on POST requests. tcpdump shows that the POST request makes it to LTM2, and LTM2 initiates a connection to a selected real server, but the POST operation does not complete, and eventually the WebLogic server times out the connection (with an error saying it can't parse the POST content; the timeout occurs per a "POST read timeout" setting in WebLogic).

Detailed iRule logging shows that when this occurs, LTM2 is unable to read the POSTed content ... when I do an HTTP::collect in an HTTP_REQUEST event, it fails to trigger an HTTP_REQUEST_DATA event. Everything appears correct - the Content-Length header is accurate, the POSTed content (per tcpdump) appears to be correct, the same as was received at LTM1, etc. But LTM2 simply doesn't read the content (apparently). There are no logged errors that I can find in the LTM log or anywhere else.

Through sheer luck, I stumbled across a workaround - if I do an HTTP::collect in HTTP_REQUEST on LTM1, followed by an HTTP::release in HTTP_REQUEST_DATA, it magically fixes LTM2's problem. Completely repeatable: take out the iRule on LTM1 and the problem begins occurring again; put it back in and the problem goes away, and LTM2 is able to do a successful HTTP::collect/HTTP_REQUEST_DATA sequence.

I have a case open with support, but they didn't have any feedback on it to this point. Has anyone encountered a similar situation? We're OK with leaving in this iRule-based fix, but would prefer not to have such a workaround in use.

Details on the environment:

- LTM1 and LTM2 are both at 11.5.2, no hotfixes
- Both VIPs are SSL ones (though I converted LTM2's VIP to non-SSL, and it didn't change anything)
- LTM1 is using an Oracle OAM authentication integration via APM (though the OAM processing all occurs cleanly without error, per all logs on LTM1 and the OAM servers); LTM2 doesn't have APM
- LTM2 does a straight HTTP, non-SSL connection to the WebLogic servers
- SNAT pools are in use on both LTM1 and LTM2
- OneConnect is used throughout (though turning it off on LTM1, LTM2, or both had no effect)
- Caching is disabled on both LTM1 and LTM2
- Compression is enabled on both LTM1 and LTM2 (though turning it off on either LTM1 or LTM2 had no effect)
- Anecdotally, the problem may have gotten worse after we put a firewall (a Cisco ASA 5585) between LTM2 and the WebLogic servers; but the firewall processing all looks completely clean, and it's not doing any HTTP inspection, just simple IP-based ACLs
- When the F5s are removed from the data flow and the browser goes directly to the WebLogic servers, the error does not occur

Any thoughts?
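The workaround iRule itself isn't shown in the post; a minimal sketch of what the HTTP::collect in HTTP_REQUEST followed by HTTP::release in HTTP_REQUEST_DATA on LTM1 might look like, assuming it is limited to POST requests that carry a Content-Length header (that guard condition is an assumption, not taken from the post):

    when HTTP_REQUEST {
        # Buffer the request body on LTM1 before it is forwarded downstream
        if { [HTTP::method] eq "POST" && [HTTP::header exists "Content-Length"] } {
            HTTP::collect [HTTP::header "Content-Length"]
        }
    }

    when HTTP_REQUEST_DATA {
        # The body is now fully buffered; release it so it is sent on to LTM2
        HTTP::release
    }

Collecting and releasing on LTM1 forces the full body to be buffered and re-sent in one piece, which is presumably why it masks whatever request-framing problem LTM2 is running into.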