pools
Connecting an AWS CloudFront Distribution Pool/Node to an F5 iApp
Hi there, I was wondering if I could get some advice on connecting an AWS CloudFront distribution pool/node to an F5 iApp. The iApp in question has a default pool of on-premises servers, but we have a requirement that, for a specific URL path, we instead forward on to an AWS CloudFront distribution. Below is a snippet from the iRule we currently have configured:

```
when CLIENT_ACCEPTED {
    SSL::disable serverside
}
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/falc/" } {
        SSL::enable serverside
        HTTP::header replace Host "d2s8lx2sdbghef.cloudfront.net"
        pool d2s8lx2sdbghef.cloudfront.net
    }
}
```

The pool and the FQDN node are showing green, which means the F5 can resolve the addresses. However, when we attempt to go to a URL which starts with the prefix above, instead of being directed to the CloudFront distribution (and the S3 content behind it) we instead get an error. We have checked, and the distribution has Redirect HTTP to HTTPS configured on the behaviour, and we are attempting to replace the Host header with the matching distribution. I was wondering if this has been encountered by anyone before, if anyone has attempted anything similar and, if you were able to get it working, how that was achieved. Thank you in advance for any assistance you may provide.
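For illustration only, here is a lightly annotated sketch of the same routing logic with logging added to help see where requests actually go. The pool name on_prem_pool is a placeholder for the iApp's default pool, and the comments about SNI and the Host header are general assumptions about what CloudFront usually requires, not something confirmed in the post above.

```
# Sketch only - assumes a server SSL profile is attached to the virtual
# server so "SSL::enable serverside" has a profile to re-encrypt with;
# CloudFront generally expects SNI and a Host header that matches the
# distribution (or one of its alternate domain names).
when CLIENT_ACCEPTED {
    # Default traffic stays unencrypted towards the on-premises pool
    SSL::disable serverside
}
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/falc/" } {
        # Re-encrypt towards CloudFront and present the distribution hostname
        SSL::enable serverside
        HTTP::header replace Host "d2s8lx2sdbghef.cloudfront.net"
        pool d2s8lx2sdbghef.cloudfront.net
        log local0. "Routing [HTTP::uri] to CloudFront"
    } else {
        # Placeholder name for the iApp's default on-premises pool
        pool on_prem_pool
        log local0. "Routing [HTTP::uri] to the default pool"
    }
}
```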
Priority Group activation between 10 servers

Hi All, is it possible to enable priority group activation across 10 servers? The condition is that only one server should be up and serving at a time; if it goes down, any other single server should become active and serve the requests. In other words, out of the 10 servers, only 1 should be serving requests on the F5 LTM at any given time.
iRule catching HTTP_REQUEST made to other Virtual Server

I'm experiencing a problem with apparently conflicting LTM iRules. I have two Virtual Servers set up (let's name one VS_TEST and the other VS_PREP). Each has a different iRule applied to it (iRule_TEST and iRule_PREP). These iRules perform the same function - they intercept incoming HTTP requests, extract some data, and then forward the data to an application running on the corresponding Pool (POOL_TEST, POOL_PREP) in the form of an HTTP GET. The application returns either Allow or Deny, informing the iRule whether to allow the request to pass through or to reject it. Each Pool has only one node.

Normally these iRules behave correctly. A request made to VS_TEST will be handled by iRule_TEST and send information to the application running on the single node in POOL_TEST.

There is a second type of request made to the Virtual Servers; let's call these password requests, as they retrieve a password that is randomly generated by the server. I need to intercept the response from the server, extract the password, and then send it to the same application as before, so I add HTTP_RESPONSE and HTTP_RESPONSE_DATA events to the iRules.

However, when I add the HTTP_RESPONSE and HTTP_RESPONSE_DATA events to both iRules, there is a conflict which depends on the order in which the iRules are updated. For example, if I update iRule_TEST first, followed by iRule_PREP:

- Requests made to VS_TEST are handled by iRule_TEST, but iRule_TEST sends the data of the request to the single node in POOL_PREP!
- Requests made to VS_PREP are handled by iRule_PREP and the data of the request is sent to the single node in POOL_PREP, as expected.

How is this possible when both POOL_TEST and the IP:port of its corresponding node are explicitly mentioned in iRule_TEST? The exact opposite happens if I update iRule_PREP first.

iRule_TEST:

```
when RULE_INIT {
    # Set ip:port of the destination node (specific to TEST)
    set static::serveripport "192.168.10.80:80"
}

when HTTP_REQUEST {
    if { [HTTP::query] starts_with "message=" } {
        # This is a request we want to intercept
        log local0. "Raw request: [HTTP::query]"

        # Extract the actual message
        regexp {(message\=)(.*)} [HTTP::query] -> garbage query

        # Connect to the node. Use catch to handle errors. Check the return value is not null.
        if { [catch {connect -timeout 1000 -idle 30 -status conn_status $static::serveripport} conn_id] == 0 && $conn_id ne "" } {
            # Send TCP payload to the application
            set data "GET /Service.svc/checkmessage?message=$query"
            set send_info [send -timeout 1000 -status send_status $conn_id $data]

            # Receive the reply from the application
            set recv_info [recv -timeout 1000 -status recv_status $conn_id]

            # Allow or deny the request based on the application response
            if { $recv_info contains "Allow" } {
                pool POOL_TEST
            } elseif { $recv_info contains "Deny" } {
                reject
            }

            # Tidy up
            close $conn_id
        } else {
            reject
        }
    }
}

# Update below

when HTTP_RESPONSE {
    # Collect all 200 responses
    if { [HTTP::status] == 200 } {
        set content_length [HTTP::header "Content-Length"]
        HTTP::collect $content_length
    }
}

when HTTP_RESPONSE_DATA {
    if { [catch {binary scan [HTTP::payload] H* payload_hex} error] ne 0 } {
        log local0. "Error whilst binary scanning response: $error"
    } else {
        if { some hex string matches } {
            # collect the password from the response and set it to $password

            # Connect to the node. Use catch to handle errors. Check the return value is not null.
            if { [catch {connect -timeout 1000 -idle 30 -status conn_status $static::serveripport} conn_id] == 0 && $conn_id ne "" } {
                # Send TCP payload to the application
                set data "GET /Service.svc/submitresponse?password=$password"
                set send_info [send -timeout 1000 -status send_status $conn_id $data]

                # Tidy up
                close $conn_id
            }
        }
    }
    HTTP::release
}
```

iRule_PREP is identical, save that it references POOL_PREP rather than POOL_TEST and uses a different static::serveripport address.
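One detail that stands out when the two iRules are read side by side: variables in the static:: namespace set in RULE_INIT are global to the device, not private to the iRule that sets them. If iRule_PREP also sets static::serveripport (to its own node's address), then whichever iRule is saved last determines the value both iRules use for their side-band connections, which would match the "last one updated wins" symptom described above. Purely as an illustration (the PREP address below is made up), one way to keep the two values separate:

```
# iRule_TEST - give the static variable a name unique to this iRule
when RULE_INIT {
    set static::test_serveripport "192.168.10.80:80"
}

# iRule_PREP - its own variable (address is a placeholder), so saving
# either iRule no longer overwrites the destination used by the other
when RULE_INIT {
    set static::prep_serveripport "192.168.20.80:80"
}
```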
iRule change pool for only one request

I want an iRule that looks like this:

```
when HTTP_REQUEST {
    switch -glob [HTTP::path] {
        "/cgi-bin/*" { pool cgi_pool }
    }
}
```

However, I find that this sometimes creates problems. I suspect that it's due to persistent HTTP connections: once that connection makes a request to /cgi-bin/* once, all future requests use the new pool. (If someone can confirm or deny that, that would be great.) So I modified the iRule to look like this:

```
when HTTP_REQUEST {
    switch -glob [HTTP::path] {
        "/cgi-bin/*" { pool cgi_pool }
        default      { pool default_pool }
    }
}
```

This seems to work, but it then causes problems when I have another iRule that also changes the pool. If that other iRule comes first, its changes get overridden by this one, and that becomes easily forgettable management overhead.

So my question is: is it possible to have a pool change be in effect for only one particular HTTP request, without having to manually select the default pool for all of the other requests? I'd be fine with terminating the HTTP session after the oddball request, but I can't see a simple way to tell the iRule to complete this one request and then close the connection.
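Not verified against this configuration, but as a sketch of the "terminate the HTTP session after the oddball request" idea mentioned above: the iRule can remember that a request was sent to cgi_pool and then close the client connection once that response has gone back, so the next request arrives on a fresh connection and is handled by the virtual server's default pool again. Pool names are the ones from the snippets above.

```
when HTTP_REQUEST {
    set close_after_response 0
    switch -glob [HTTP::path] {
        "/cgi-bin/*" {
            pool cgi_pool
            # Remember that this request went to the alternate pool
            set close_after_response 1
        }
    }
}
when HTTP_RESPONSE {
    if { $close_after_response } {
        # Close the keep-alive connection after this response completes,
        # so later requests are load-balanced on a new connection using
        # the virtual server's default pool.
        HTTP::close
    }
}
```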
How do I enable and disable pool members using iControl REST

Hi, I'm trying the following PUT request to enable a pool member (IPs masked out):

https://10.102.xx.xx/mgmt/tm/ltm/pool/~QA~pool_vcoza_portal_UAT/members/~QA~10.102.xx.xx:9001

My request has Content-Type: application/json and the following body:

```
{ "state": "up" }
```

I get the following result back:

```
{"code":400,"message":"invalid property value \"state\":\"up\"","errorStack":[]}
```

However, doing the same kind of thing for other properties, dynamicRatio for example, does work. Please help!
iControl REST - Finding a pool with a specific member

Hi, would anyone know of a shortcut for finding a pool (or pools) with a specific member? In the Web GUI (v13.1.3), when viewing the properties of a node there is a Pool Membership tab that shows which pools the node is a member of. I would like to use an iControl REST call to determine the same thing.

I have the code to get the list of pools with their basic properties. One of the pool's properties is the memberReference link. I can use this link to get a list of pool members with properties including memberName (nodeName:port), IP address, etc. Potentially I could query every pool (500+) and build a list of pool members with properties like PoolName, memberName, IPaddress.... Then I could scan this list for either the node or the node's IP address. It's a bit of a pig; I just thought there might be a quicker way.

...Patrick
SSL issue

Hello there, we have an F5 LTM and a virtual server configured to a server on port 443. The topology is:

Computer --> F5 LTM --> switch --> server

When we try to connect to the server over HTTPS we see the message "Connection reset" in the browser, but if we try to connect without going through the F5 the connection is successful. We don't have any client or server SSL profile configured. This is the configuration on the F5:

```
# Virtual Server
ltm virtual /Common/Server1 {
    destination /Common/10.1.5.X:443
    ip-protocol tcp
    mask 255.255.255.255
    pool /Common/Server1
    profiles {
        /Common/tcp { }
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
}

# Pools
ltm pool /Common/Server1 {
    members {
        /Common/10.1.7.X:443 {
            address 10.1.7.X
        }
    }
    monitor /Common/https_443
}

# Profiles - default TCP profile
ltm profile tcp tcp {
    ack-on-push enabled
    close-wait-timeout 5
    congestion-control high-speed
    deferred-accept disabled
    delayed-acks enabled
    ecn disabled
    fin-wait-timeout 5
    idle-timeout 300
    keep-alive-interval 1800
    limited-transmit enabled
    max-retrans 8
    nagle disabled
    proxy-buffer-high 49152
    proxy-buffer-low 32768
    proxy-mss disabled
    proxy-options disabled
    receive-window-size 65535
    reset-on-timeout enabled
    selective-acks enabled
    send-buffer-size 65535
    slow-start enabled
    syn-max-retrans 3
    time-wait-recycle enabled
    time-wait-timeout 2000
    timestamps enabled
}
```

As you can see, we don't have any client or server SSL profile, and we have tried changing "translate-port" to disabled and "Source Address Translation" to Auto Map, but neither of these worked. We also took a tcpdump in which we can see the TCP reset coming from 10.1.7.X (tcpdump.png), and ran some curl (curl.png), openssl (openssl.png and openssl2.png) and telnet (telnet.png) tests. Hope you can help us find out what's going on. Thank you.
HA Group and pools

Hi, according to what I have read, it seems that the HA Group (HAG) feature is best suited for monitoring trunks and clusters (on VIPRION), but pools can be used as well. My question is about best practice for configuring pools in an HAG. I know the article "Best practices for the HA group feature" - there is not a lot there, apart from avoiding pools with members that are not stable (that can go down and come back up rapidly).

I can see two scenarios where using a pool makes sense:

- The same pool used on each device in the HAG - each device has a completely separate network path to the members, so it is possible that a member is down on one device and up on another. Quite simple to configure.
- Separate pools on each device pointing to different members - this is the one I am not sure how to implement.

The second case is shown in the video "Setting up HA Groups (part 2 of 2)". One of the conditions is that each device should have not only a separate pool (A, B, C) but also a separate VS using those pools, like this:

- BIG-IPA - VSA - PoolA
- BIG-IPB - VSB - PoolB
- ...

This is also easy to configure, but there is one catch: in case of failover the VS IP will change, so connections will be lost and some external method is necessary to direct clients to the new VIP - am I right? So not a perfect solution.

I can imagine that there is a way to switch the pool assigned to the same VS depending on which device is Active (using an iRule with HA::status and tcl_platform(machine)) - but is that a good idea? Sure, connections will be reset as well, but there is no need to redirect clients to another VIP.

Is there any other way to have the same VIP on every device but use separate pools for the HAG config? Any other scenarios for using pools in an HAG?

Piotr
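Purely as an illustration of the pool-switching idea mentioned above (not a recommendation), here is a minimal sketch that selects a pool based on which device is currently processing the connection. It uses [info hostname] to identify the local device in place of tcl_platform(machine); the hostnames and pool names are placeholders. As noted in the post, existing connections would still be reset on failover - this only avoids having to move clients to a different VIP.

```
when CLIENT_ACCEPTED {
    # Pick the pool that is "local" to whichever device currently holds
    # the traffic. Hostnames and pool names below are placeholders.
    switch -glob [info hostname] {
        "bigip-a*" { pool PoolA }
        "bigip-b*" { pool PoolB }
        default    { pool PoolA }
    }
}
```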