pool — 53 Topics

Node/server is not showing on F5
Hello all, I am new to F5 and want to understand how things run on it. While tracing a route to a server, it shows up behind the F5 on a VLAN, but when I log in to the F5 and search for this server in the node list, it is not there. So I am curious why this server is not showing there.

    bash-2.03$ traceroute 10.52.24.20
    traceroute to 10.52.24.20 (10.62.34.20), 30 hops max, 40 byte packets
     1  1xx.xx.xxx.249 (1xx.xx.xxx.249)  0.717 ms  0.540 ms  0.584 ms
     2  10.xx.xx.129 (10.xx.xxx.129)  0.434 ms  0.343 ms  10.xx.xxx.133 (10.xx.xxx.133)  0.342 ms
     3  10.xxx.x.26 (10.xx.x.26)  0.572 ms  0.481 ms  0.472 ms
     4  services-s.itsec.asb (10.xx.x.xx)  0.826 ms  0.887 ms  0.717 ms
     5  abc-dcg-lbs-03_v528.noc.xyz.com (10.62.32.13)  0.944 ms  0.907 ms  11.329 ms   <<--- F5 and VLAN
     6  mnops09-pr.wby1-stg.abc.com (10.52.24.20)  1.323 ms  1.654 ms  1.469 ms

    (tmos)# show ltm node 10.52.24.20
    01020036:3: The requested Node (/Common/10.52.24.20) was not found.
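A hop through the BIG-IP in a traceroute does not by itself mean the server is configured as an LTM node — the BIG-IP may simply be routing or forwarding that traffic (for example via a forwarding/wildcard virtual server), or the node may live in a partition other than /Common. A hedged check from tmsh, searching all partitions for the address taken from the post above:

```
(tmos)# cd /
(tmos)# list ltm node recursive one-line | grep 10.52.24.20
(tmos)# list ltm pool recursive one-line | grep 10.52.24.20
```

If neither command matches, the address is most likely not referenced as a node or pool member anywhere on this unit, and the traffic is passing through the BIG-IP some other way.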
BIG-IP DNS: Check Status Of Multiple Monitors Against Pool Member

Good day, everyone! Within the LTM platform, if a pool is configured with "Min 1 of" and multiple monitors, you can check the status per monitor via tmsh show ltm monitor <name>, or you can click the pool member in the TMUI and it will show you the status of each monitor for that member. I cannot seem to locate a similar function on the GTM/BIG-IP DNS platform. We typically use this methodology when transitioning to a new type of monitor, so we can passively test connectivity, without the potential for impact, prior to removing the previous monitor. Does anyone know a way, through tmsh or the TMUI, to check an individual pool member's status against each of the multiple monitors configured for its pool? Thanks, all!
pool members can't connect to another Virtual Server

Hello, this is surely a problem that has already been addressed, but I have an issue with a client that is part of a pool behind a VIP. If, from this client, I attempt a connection to another VIP, I get no response, while the connection works when I connect to my own VIP. The client's route to the VIP network is the F5 interface. Is there something wrong? I am attaching a diagram that is maybe better than a thousand words...
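A common cause of this symptom (offered as a hedged guess, since the diagram is not reproduced here): when the connecting client is itself a server on a directly connected subnet, the second VIP's pool member may answer the client directly instead of replying back through the BIG-IP, so the client sees responses from an unexpected source address and drops them. Enabling SNAT on the second virtual server forces return traffic back through the BIG-IP. A sketch in tmsh, with an illustrative virtual server name:

```
(tmos)# modify ltm virtual vs_second source-address-translation { type automap }
```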
Creating and managing priority groups with iControl

I am attempting to configure a special load balancing strategy based on priority groups. Essentially I want my pool of n nodes to have 2 priority groups (PGs): blue and green. Each PG has n/2 pool members in it (hence if the pool has 10 nodes, 5 are in the "blue" PG and 5 are in the "green" PG). At any given time, one of the two PGs will have the higher priority. I simply want the PG with the higher priority to be served traffic (and, within the PG, all nodes round-robined). Hence, if the blue PG has a priority value of, say, 4, and green's value is 2, then F5 should only serve traffic to the blue PG nodes, and should round-robin within that PG. If the priorities are swapped, so that green's value is 4 and blue's value is 2, then only the green nodes are served traffic, again in round-robin fashion. Etc.

To do this I need to:

1. Programmatically create the blue/green PGs in the first place
2. Programmatically set the priorities of each PG (say, initialize blue to 4 and green to 2)
3. Programmatically get the priorities of each PG

I found this article which I believe helps me accomplish the last two items (though I do have a question about it), but am still at a loss as to how to programmatically create PGs and assign nodes to them. So I ask:

- What iControl API methods do I engage to create PGs and assign nodes to them?
- If LocalLBPool.set_member_priority is what I need to set PG priorities, I'm confused about the args I should be passing into it. I would have expected the argument to take the name of the PG to set the priority for. Instead it takes a list of pool names, and respective nodes and priorities to set within those pools. This leads me to believe that PGs are more of a UI construct (in the F5 web app), and that the iControl API just sets priorities individually.

Any thoughts/ideas about my questions?
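On the poster's suspicion: priority groups are indeed not first-class objects — a "group" is just the set of members sharing the same priority number, combined with the pool's Priority Group Activation (minimum active members) setting. Below is a hedged Python sketch that builds iControl REST payloads for this layout; the attribute names (priorityGroup, minActiveMembers) mirror the tmsh fields priority-group and min-active-members, and every pool/member name and priority value here is illustrative, not taken from a confirmed API call:

```python
def member_payload(name, priority):
    """REST body for one pool member carrying its priority-group number."""
    return {"name": name, "priorityGroup": priority}

def pool_payload(pool_name, blue, green, blue_prio=4, green_prio=2,
                 min_active=1):
    """REST body for a pool split into a 'blue' and a 'green' priority group."""
    members = ([member_payload(m, blue_prio) for m in blue]
               + [member_payload(m, green_prio) for m in green])
    return {
        "name": pool_name,
        # Priority Group Activation ("Less than 1 available member")
        "minActiveMembers": min_active,
        "members": members,
    }

# Illustrative names and addresses only
payload = pool_payload("app_pool",
                       blue=["10.0.0.1:80", "10.0.0.2:80"],
                       green=["10.0.0.3:80", "10.0.0.4:80"])
# This dict would be POSTed to https://<big-ip>/mgmt/tm/ltm/pool
```

Swapping which group is live is then just updating each member's priorityGroup (set the green members to 4 and the blue members to 2), and reading priorities back is a GET on the same members collection — consistent with the observation that set_member_priority works member-by-member rather than on a named group.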
iRule change pool for only one request

I want an iRule that looks like this:

    when HTTP_REQUEST {
        switch -glob [HTTP::path] {
            "/cgi-bin/*" { pool cgi_pool }
        }
    }

However, I find that this sometimes creates problems. I suspect it's due to persistent HTTP connections: once a connection makes a request to /cgi-bin/* once, all future requests on it use the new pool. (If someone can confirm or deny that, that would be great.) So I modified the iRule to look like this:

    when HTTP_REQUEST {
        switch -glob [HTTP::path] {
            "/cgi-bin/*" { pool cgi_pool }
            default { pool default_pool }
        }
    }

This seems to work, but it causes problems when I have another iRule that also changes the pool. If that other iRule comes first, its changes get overridden by this one, and that becomes easily forgotten management overhead. So my question is: is it possible to have a pool change be in effect for only one particular HTTP request, without having to manually select the default pool for all of the other requests? I'd be fine with terminating the HTTP session after the oddball request, but I can't see a simple way to tell the iRule to complete this one request and then close the connection.
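On the suspicion above: by default the result of the pool command sticks to the server-side connection, so later keep-alive requests on the same connection do follow the earlier decision. Attaching a OneConnect profile to the virtual server makes the load-balancing decision per HTTP request instead of per TCP connection, which lets the first iRule work without a default clause. A hedged tmsh sketch using the built-in profile (the virtual server name is illustrative):

```
(tmos)# modify ltm virtual vs_www profiles add { oneconnect }
```

Note that OneConnect has side effects of its own (connection reuse toward the servers), so treat this as something to test rather than a drop-in fix.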
MySQL active connection never bleed off to other pool member

I am running Galera MySQL behind F5 with a Performance (Layer 4) virtual server type, and I have set up 3 MySQL nodes as pool members with priority, so only 1 MySQL node is used and the other two are standby. Everything was good, but today when I shut down the primary node, which was active, my application broke, and when I checked the logs I found:

    (2006, "MySQL server has gone away (error(104, 'Connection reset by peer'))")

The solution was to restart the application. It looks like the active MySQL connections never bleed off to the other pool members. What is wrong with my setup?
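One hedged possibility: by default a pool's Action On Service Down is None, so when the monitor marks the primary down, already-established Layer 4 flows are left alone until the client notices on its next read or write — which matches the error above surfacing only at the application layer. Setting the pool to reset (or reselect) tears those flows down promptly so clients reconnect and land on the standby member. A sketch in tmsh with an illustrative pool name:

```
(tmos)# modify ltm pool mysql_pool service-down-action reset
```

Also keep in mind the BIG-IP only load balances new connections: the client application still has to open a fresh connection (rather than reuse a dead pooled one) before it can reach the standby node.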
Pool Member Status and how QTYPE works?

Hello, I wrote a small iRule to show the pool member status as JSON.

    when HTTP_REQUEST {
        if { [string tolower [HTTP::host]] eq "f5_status.XXXX.com" } {
            # update Pool list, enter Pool names
            set poolname "P_X_http80 P_XX_http80 P_XXX_https6443"
            # update Root Partition for replacement
            set stringmap "\"%1234\" \"\:\""
            set poolnummer 1
            set json "\{\"status\"\:\{\"time\"\:\"[clock format [clock seconds] -format "%Y-%b-%dT%H:%M:%S%Z"]\","
            foreach pool_n $poolname {
                set list_all ""
                set list_up ""
                set list_down ""
                set member ""
                set member_l ""
                foreach member [members -list $pool_n] {
                    set member_l "[string map $stringmap $member]"
                    append list_all "\"$member_l\","
                }
                set member ""
                set member_l ""
                foreach member [members -list $pool_n] {
                    if { !([active_members -list $pool_n] contains $member) } {
                        set member_l "[string map $stringmap $member]"
                        append list_down "\"$member_l\","
                    }
                }
                set member ""
                set member_l ""
                foreach member [active_members -list $pool_n] {
                    set member_l "[string map $stringmap $member]"
                    append list_up "\"$member_l\","
                }
                set count_a [active_members $pool_n]
                set count [members $pool_n]
                set count_d [expr {$count} - {$count_a}]
                append json "\"pool$poolnummer\"\:\{\"name\"\:\"$pool_n\",\"membercountactive\"\:$count_a,\"membercountdown\"\:$count_d,\"membercountall\"\:$count,\"memberactive\"\:\[[string trimright $list_up ","]\],\"memberdown\"\:\[[string trimright $list_down ","]\],\"memberall\"\:\[[string trimright $list_all ","]\]\},"
                incr poolnummer
            }
            set json [string trimright $json ","]
            append json "\}\}"
            HTTP::respond 200 content "$json" "Content-Type" "application/json"
        }
    }

But I found that a newer implementation (v12) changed the command to include a QTYPE:

    members [-list] [QTYPE] <poolName>    with QTYPE one of: blue green yellow red gray

https://clouddocs.f5.com/api/irules/members.html
https://clouddocs.f5.com/api/irules/active_members.html

Does anyone know how this works, or have a working iRule with this "qtype"? I would like to see an example!
iRule to close an established connection

I have a TCP (not HTTP) based service where client connections are permanent. By that I mean that once a connection to a pool member gets established, it stays there 24x7. The pool has 2 pool members configured with priority group: the first pool member has priority 2 and the second one priority 1, with a "Less than 1" value for Priority Group Activation. The pool also has Action On Service Down set to Reject. That takes care of any scenario where a pool member is marked down by health monitors: whenever the highest-priority pool member is marked down, all established connections to it get closed automatically. The client applications immediately reconnect and get established connections to the second, lower-priority pool member. So far, everything is exactly what we want to accomplish.

The challenge comes when the higher-priority pool member is marked up/available once again. We're looking for an automatic way to close the already-established connections to the lower-priority pool member as soon as the higher-priority pool member becomes available. Is there a way to do so? I'm not sure what event I should use for an already-established connection. The first ones that came to mind were LB_SELECTED and CLIENT_ACCEPTED. So far, I've tried the following options without any results:

    when LB_SELECTED {
        if { [LB::status pool poolname member 10.0.0.1 80] equals "up" and [IP::addr [LB::server addr] equals 10.0.0.2] } {
            reject
        }
    }

    when LB_SELECTED {
        if { [LB::status pool poolname member 10.0.0.1 80] equals "up" and [IP::addr [LB::server addr] equals 10.0.0.2] } {
            LB::reselect pool poolname member 10.0.0.1
        }
    }
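For the question above: iRule events only fire on connection activity, so an idle established flow gives the iRule nothing to hook when the higher-priority member comes back — LB_SELECTED and CLIENT_ACCEPTED have already run for that connection. One out-of-band approach (hedged; run manually or from a monitoring script when the primary recovers, and the address is taken from the post) is to delete the existing connections to the lower-priority member from tmsh, after which the clients reconnect and land on the restored primary:

```
(tmos)# delete sys connection ss-server-addr 10.0.0.2
```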
No traffic going to pool, but works with different name

I'm seeing an odd issue on our LTM where traffic will not flow to one of my pools. It's referenced inside an iRule, but the issue isn't with the rule itself. In a bit of desperation, I created a new pool which was exactly the same as the problem pool, except with '_Test' appended to the name. I updated the iRule with the new pool name and traffic flowed as expected. I then deleted the original pool, saved the config, re-created it, modified the iRule back to the original pool name, and once again no traffic. The stats are 0's across the board for the pool — no traffic even attempting to reach it. It seems odd that the name of the pool would matter, but I can't come up with any other explanation. Has anyone seen something like this? For what it's worth, I'm on 11.6.0 HF1.

    ltm pool /D04TS/DP_REST_FIREWALL_Servers {
        members {
            /D04TS/DP-1:8149 {
                address X.X.X.X
            }
        }
        monitor /Common/gateway_icmp
    }

    ltm pool /D04TS/DP_REST_FIREWALL_Servers_Test {
        members {
            /D04TS/DP-1:8149 {
                address X.X.X.X
            }
        }
        monitor /Common/gateway_icmp
    }
Selecting pool in iRule fails

Hello, do you have any idea why selecting a pool in an iRule fails after the last licence update? Previous status: I had some iRules to select the correct pool, and they worked. Status now: when the pool is mentioned in any iRule, "Connection reset by peer" is always returned. For now, I temporarily use the pool www_pool to serve all requests, and they are served. However, when I add one simple iRule:

    when HTTP_REQUEST {
        pool www_pool
    }

then requests are no longer served and the F5 resets the connection. The real server does not get anything on its target port (nothing can be seen with tcpdump). Thanks for any suggestions.
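When an unconditional pool selection resets like this, a first step is to confirm the iRule actually executes and where the connection dies. A hedged logging sketch (the pool name is taken from the post; log lines land in /var/log/ltm):

```tcl
when HTTP_REQUEST {
    # confirm the event fires and which request triggered it
    log local0. "Request [HTTP::host][HTTP::uri] -> www_pool"
    pool www_pool
}
when LB_FAILED {
    # fires when no usable pool member could be selected
    log local0. "LB_FAILED from client [IP::client_addr]"
}
when SERVER_CONNECTED {
    log local0. "server side connected to [IP::server_addr]"
}
```

If HTTP_REQUEST logs but SERVER_CONNECTED never does, the reset is happening at pool selection or on the server-side connection, which points toward provisioning/licensing or routing for that pool rather than the iRule text itself.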