iControl REST: Working with Pool Members
Since iControl REST is the new kid on the block, it's bound to start getting some of the same questions we've addressed with traditional iControl. One of these oft-asked and misunderstood questions is about enabling/disabling pool members. The original poster in this case is actually facing a syntax issue with the allowable state values in the JSON payload, but I figured I'd kill two birds with one stone here and address both concerns going forward.

DevCentral member Rudi posted in Q&A asking for some assistance with disabling a pool member. He was able to change some properties on the pool member, but trying to change the state resulted in this error:

    {"code":400,"message":"invalid property value \"state\":\"up\"","errorStack":[]}

The REST interface is complaining about an invalid property value, namely the "up" state. If you do a query against an "up" pool member, you can see that the state is "unchecked" instead of up:

    {
        "state": "unchecked",
        "connectionLimit": 0,
        "address": "192.168.101.11",
        "selfLink": "https://localhost/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.101.11:8000?ver=11.5.1",
        "generation": 63,
        "fullPath": "/Common/192.168.101.11:8000",
        "partition": "Common",
        "name": "192.168.101.11:8000",
        "kind": "tm:ltm:pool:members:membersstate",
        "dynamicRatio": 1,
        "inheritProfile": "enabled",
        "logging": "disabled",
        "monitor": "default",
        "priorityGroup": 0,
        "rateLimit": "disabled",
        "ratio": 1,
        "session": "user-enabled"
    }

You might also note the session keyword in the pool member attributes. This is the key that controls the forced-offline behavior. The mappings of these two values (state and session) to the GUI state of a pool member are as follows:

    GUI: Enabled          {"state": "unchecked", "session": "user-enabled"}
    GUI: Disabled         {"state": "unchecked", "session": "user-disabled"}
    GUI: Forced Offline   {"state": "user-down", "session": "user-disabled"}

So to change a value on a pool member, you need to use the PUT method and specify the pool name and the pool member in the URL:

    curl -sk -u admin:admin https://192.168.6.5/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.101.11:8000/ \
        -H "Content-Type: application/json" -X PUT -d '{"state": "user-down", "session": "user-disabled"}'

This results in a changed state and session for this pool member:

    {
        "state": "user-down",
        "connectionLimit": 0,
        "address": "192.168.101.11",
        "selfLink": "https://localhost/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.101.11:8000?ver=11.5.1",
        "generation": 63,
        "fullPath": "/Common/192.168.101.11:8000",
        "partition": "Common",
        "name": "192.168.101.11:8000",
        "kind": "tm:ltm:pool:members:membersstate",
        "dynamicRatio": 1,
        "inheritProfile": "enabled",
        "logging": "disabled",
        "monitor": "default",
        "priorityGroup": 0,
        "rateLimit": "disabled",
        "ratio": 1,
        "session": "user-disabled"
    }

The best tip I can give for discovering the nuances of iControl REST is to query existing objects, change their default values around in the GUI, and re-query to see what the values are supposed to be. Happy coding!
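For completeness (not part of the original post), the reverse operation follows the same pattern. A minimal sketch, assuming the same example BIG-IP, credentials, pool, and member as above; on the versions I've worked with, setting the state to "user-up" hands state control back to the health monitors:

    # Return a forced-offline member to Enabled (same example values as above;
    # "user-up" releases the user-forced down state so the monitors decide again)
    curl -sk -u admin:admin https://192.168.6.5/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.101.11:8000/ \
        -H "Content-Type: application/json" -X PUT -d '{"state": "user-up", "session": "user-enabled"}'

    # Re-query to confirm -- state should come back as "unchecked" (or the
    # monitor result) with session "user-enabled"
    curl -sk -u admin:admin https://192.168.6.5/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.101.11:8000/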
LTM pool showing as Red (down) and unable to connect before deadline

I have a couple of pool members that have stopped responding to their 443 monitors all of a sudden. It has worked fine forever, but now they are in a down state while the port 80 monitors continue to stay green. Is this most likely a health monitor issue? I wouldn't think so, since it operated properly beforehand. The troubling part to me is that it is down on two separate nodes, both for 443. The only thing I can think is that it could be on the customer's side, but I am really not sure why it would be 443 only. Any insight would be very helpful.
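Not from the thread itself, but one quick way to narrow this down is to test the member on 443 directly from the BIG-IP shell; the member address below is a placeholder. If the TLS handshake itself fails (expired certificate, protocol or cipher mismatch after a server-side change), that would explain 443 going red while the plain port 80 monitor stays green:

    # Placeholder member address -- substitute one of the affected pool members
    openssl s_client -connect 10.0.0.10:443 -servername app.example.com </dev/null

    # Or fetch the member over HTTPS with verbose output to see where it fails
    curl -vk https://10.0.0.10:443/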
Best way to set a pool member as Disabled from an iRule

Hi all, I have a requirement that needs pool members to be disabled when a certain number of concurrent sessions has been reached, to stop new sessions going to the member (memory-locking errors in the app occur above certain user counts, and they would rather hard-deny users than bring everything down). I have written an iRule that does all this tracking and management, but what I can't figure out is a simple, good way to disable nodes. This iRule will be applied to many pools and 100-odd nodes, so it can't be hard coded. Right now in my dev environment I am looping over pool members and, if any are over threshold, using LB::down on that pool member. After looping I then force an LB::select. This works because I am relying on a health monitor up delay to keep all the members I just marked as down, down. This logic gets repeated for every new session, and I have nodes constantly going up and down. I would much rather mark them as disabled, and bring them back to active if the session count drops below threshold, than do what I am doing now, but I can't figure out a good way. I can't see how you would do it with iCall, as all the examples I see have hard-coded triggers. So what will work best? I assume some kind of sideband connection? Also running 11.6 HF5. All advice and ideas welcome! Cheers
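One approach that fits this requirement (offered as a sketch, not as the poster's confirmed solution) is to do the disable/enable out of band through iControl REST, using the same session key described in the article above, driven by whatever is tracking the session counts -- an external script or an iCall-invoked script, for example. The host, credentials, pool, and member below are placeholders:

    #!/bin/bash
    # Placeholder values -- substitute your BIG-IP, credentials, pool, and member
    BIGIP=192.0.2.10
    POOL=app_pool
    MEMBER=~Common~192.0.2.51:8080

    # Disable: active and persisted connections continue, no new sessions are sent
    curl -sk -u admin:admin "https://${BIGIP}/mgmt/tm/ltm/pool/${POOL}/members/${MEMBER}/" \
        -H "Content-Type: application/json" -X PUT -d '{"session": "user-disabled"}'

    # Re-enable once the session count drops back below threshold
    curl -sk -u admin:admin "https://${BIGIP}/mgmt/tm/ltm/pool/${POOL}/members/${MEMBER}/" \
        -H "Content-Type: application/json" -X PUT -d '{"session": "user-enabled"}'

Unlike LB::down, which is evaluated per connection and leans on the monitor up delay, the disabled session state persists until it is changed back, so nothing has to be repeated for every new session.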
How to configure members in a pool... wildcard or specific ports?

I'd like to know the best practice for putting members in a pool. I have two virtual servers set up for an application, one for TCP 80 and one for port 50000. Both of these virtual servers use the same pool. Is it better to have the members as 192.168.1.10:80 and 192.168.1.10:50000, or just use 192.168.1.10:0? I will have multiple servers in the pool for load balancing.
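For reference, the two layouts being compared look roughly like this in tmsh; the second server address and the pool names are placeholders. As I understand it, a port-0 (wildcard) member preserves the client's original destination port, so the port 80 virtual still reaches the servers on 80 and the port 50000 virtual on 50000 from a single shared pool, while the explicit-port approach needs one pool per service port but lets each port be monitored and reported on separately:

    # Option 1: explicit ports -- one pool per service port (placeholder names)
    tmsh create ltm pool app_pool_80 members add { 192.168.1.10:80 192.168.1.11:80 }
    tmsh create ltm pool app_pool_50000 members add { 192.168.1.10:50000 192.168.1.11:50000 }

    # Option 2: wildcard port 0 -- one pool shared by both virtual servers
    tmsh create ltm pool app_pool_any members add { 192.168.1.10:0 192.168.1.11:0 }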