Forum Discussion
BigSuds: Different status on LocalLB.Pool.get_member_monitor_status
Hello,
I have a problem when I get the status of members from pools.
Environment:
- A vcmp instance (BIG-IP 11.4.1 Build 675.0 Hotfix HF7)
- 2 pools: pool_test_01 and pool_test_02, with an HTTP monitor (the same for both pools)
- 1 node: node001
- bigsuds: '1.0.1'
- node001 is marked down by monitor in pool_test_01
- node001 is marked as "Offline (Enabled) - Forced down" in pool_test_02
Which gives me:
LocalLB.Pool.get_member_monitor_status(['pool_test_01'], [[{'address': 'node001', 'port': 80}]])
=> [['MONITOR_STATUS_DOWN']] => OK, everything is consistent
LocalLB.Pool.get_member_monitor_status(['pool_test_02'], [[{'address': 'node001', 'port': 80}]])
=> [['MONITOR_STATUS_FORCED_DOWN']] => OK, still as expected
My Problem:
If I want to get the status of node001 from both pools at the same time, using:
LocalLB.Pool.get_member_monitor_status(['pool_test_01', 'pool_test_02'], [[{'address': 'node001', 'port': 80}]])
=> [['MONITOR_STATUS_DOWN', 'MONITOR_STATUS_DOWN']]
But the expected result would be:
=> [['MONITOR_STATUS_DOWN', 'MONITOR_STATUS_FORCED_DOWN']]
So, I don't know if it's a bug in my code, in BigSuds, ... I have also tried passing the node name twice, like:
LocalLB.Pool.get_member_monitor_status(['pool_test_01', 'pool_test_02'], [[{'address': 'node001', 'port': 80},{'address': 'node001', 'port': 80}]])
But in this case I get: [['MONITOR_STATUS_DOWN', 'MONITOR_STATUS_DOWN', 'MONITOR_STATUS_DOWN', 'MONITOR_STATUS_DOWN']]
If I pass the node twice, like this: LocalLB.Pool.get_member_monitor_status(['pool_test_01', 'pool_test_02'], [[{'address': 'node001', 'port': 80}], [{'address': 'node001', 'port': 80}]])
I get: [['MONITOR_STATUS_DOWN', 'MONITOR_STATUS_DOWN'], ['MONITOR_STATUS_FORCED_DOWN', 'MONITOR_STATUS_FORCED_DOWN']]
And even if the last example could be a valid response, it looks like a loop problem somewhere...
If someone has an idea on how to debug/fix, I'm interested :)
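One way to debug this kind of mismatch is to inspect the raw SOAP exchange. bigsuds is built on suds, which logs through Python's standard logging module, so a minimal sketch (run before the calls above) would be:

import logging

# suds (which bigsuds wraps) logs the XML it sends and receives at
# DEBUG level; that shows exactly how the pool/member arrays are
# serialized in the request and what the server returns.
logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.client').setLevel(logging.DEBUG)
logging.getLogger('suds.transport').setLevel(logging.DEBUG)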
Have a good day :)
--
G2.
Try this:
LocalLB.Pool.get_member_monitor_status(['pool_test_01', 'pool_test_02'], [[{'address': 'node001', 'port': 80}],[{'address': 'node001', 'port': 80}]])
- mhite_60883 (Cirrocumulus)
- Finally, this is the correct syntax. As I said in my last post, I can't reproduce my strange behavior today. Thank you ;)
Hi Mhite,
Thanks for your answer. That's the last thing I tried and described, but in that case I have 2 pools and 1 node, so I should get 2 answers, yet I got 4.
I know it's still parsable, and reversing the algorithm is quite simple, but it's strange behavior. Or at least not the behavior I was expecting (more responses than questions asked of the vCMP instance) :p
The overall idea behind this is to get the status of all nodes in all pools at the same time.
Best Regards,
G2.
- mhite_60883 (Cirrocumulus): Oh, my bad! Yeah, that does seem like very strange behavior! I don't have a good answer for you, sorry.
- osnetworks_6668 (Nimbostratus)
Hi, I don't have an answer, just sharing my results from trying to replicate your issue. In my case, I get a different outcome:
ENVIRONMENT
- BIG-IP 6900 11.3.0 Build 3131.0 (My test box which hasn't been upgraded to match our live box yet)
- 2 pools: pool1 and pool2, with http monitor set on both pools
- 1 node: member1
- bigsuds: '1.0.1'
CONTEXT
- member1 in pool1 is "Offline (Enabled) - Pool member has been marked down by a monitor"
- member1 in pool2 is "Forced Offline (Only active connections allowed)"
SCRIPT
pool1 = 'pool_test_01'
pool2 = 'pool_test_02'
member1 = 'node001'
port = '80'

print ""
print 'pool1 member1 status'
print pl.get_member_monitor_status(['/Common/' + pool1], [[{'address': member1, 'port': port}]])

print ""
print 'pool2 member1 status'
print pl.get_member_monitor_status(['/Common/' + pool2], [[{'address': member1, 'port': port}]])

print ""
print 'pool1 and pool2 member1 status'
print pl.get_member_monitor_status(['/Common/' + pool1, '/Common/' + pool2], [[{'address': member1, 'port': port}]])
OUTPUT
pool1 member1 status
[['MONITOR_STATUS_DOWN']]

pool2 member1 status
[['MONITOR_STATUS_FORCED_DOWN']]

pool1 and pool2 member1 status
[['MONITOR_STATUS_DOWN']]
As you can see, individually, each monitor status is shown correctly. However, unlike yours, I only get the first pool's member status when querying both pools simultaneously.
- osnetworks_6668 (Nimbostratus)
Just one more thought: when you specify 2 pools and one member, I would expect two results, i.e. pool1+member1 and pool2+member1. Similarly, when you specify two pools and two members (even if it's the same member repeated), I would expect 4 results, i.e. pool1+member1, pool1+member2, pool2+member1, pool2+member2. So all your results appear to show the correct number of results.
I think your last example shows the correct output; however, the other results you are getting look inconsistent with what you would expect. One theory I have is that perhaps you left the default "State Options" setting of "Apply new state to all pool member instances" enabled if you happened to be changing the State between tests? Annoyingly, this option only appears once you select a different State radio button. Could this be the cause?
Hello,
I agree with you: with 2 pools and 2 nodes I expect 4 results, but passing the node twice was a workaround to get the results for node1 within the 2 pools.
So if I have this configuration:

- pool1:
  - node1
  - node2
- pool2:
  - node1
  - node3
And I want to get the status of pool1/node1, pool1/node2, pool2/node1, and pool2/node3, but I don't know how to get these 4 results with the syntax of get_member_monitor_status.
For now I loop over my pools/nodes and perform one request per expected result, roughly like the sketch below.
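A minimal sketch of that per-result loop, assuming an open bigsuds connection (the hostname, credentials, and pool/member layout below are illustrative):

import bigsuds

# Illustrative connection details; replace with your own.
b = bigsuds.BIGIP(hostname='192.0.2.10', username='admin', password='admin')
pl = b.LocalLB.Pool

pools = {
    'pool1': [('node1', 80), ('node2', 80)],
    'pool2': [('node1', 80), ('node3', 80)],
}

# One request per pool/member pair: correct results, but many calls.
for pool, members in pools.items():
    for address, port in members:
        status = pl.get_member_monitor_status(
            [pool], [[{'address': address, 'port': port}]])[0][0]
        print '%s %s:%s => %s' % (pool, address, port, status)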
By the way, I'm alone in a testing environment and I can assure you I'm not changing the state of nodes between tests :p
- osnetworks_6668 (Nimbostratus)
OK, so I have taken your configuration above (pool 1 contains nodes 1 & 2, pool 2 contains nodes 1 & 3) and used the following code:
pool1 = 'pool_test_01'
pool2 = 'pool_test_02'
member1 = 'node001'
member2 = 'node002'
member3 = 'node003'
port = '80'

print "OUTPUT"
print pl.get_member_monitor_status(
    [pool1, pool2],
    [[{'address': member1, 'port': port}, {'address': member2, 'port': port}],
     [{'address': member1, 'port': port}, {'address': member3, 'port': port}]])
print pl.get_member_session_status(
    [pool1, pool2],
    [[{'address': member1, 'port': port}, {'address': member2, 'port': port}],
     [{'address': member1, 'port': port}, {'address': member3, 'port': port}]])
This gives the desired results. Please note I have been changing the states of the nodes, so the context is probably different from your original scenario. Please also note I have included both get_member_session_status and get_member_monitor_status, as it is the combination of these two that provides the actual status options shown in the Web UI - see here for more details. In conclusion, it looks like you must specify the pool members for each pool, even if they are shared between pools.
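For illustration, a rough sketch of combining the two calls into one UI-style label per member (the mapping in ui_label is an assumption covering a few common cases, not the Web UI's exact table):

members = [[{'address': member1, 'port': port}, {'address': member2, 'port': port}],
           [{'address': member1, 'port': port}, {'address': member3, 'port': port}]]

# Hypothetical helper: turns a (monitor, session) status pair into a
# rough UI-style label for a few common combinations.
def ui_label(monitor, session):
    forced_disabled = (session == 'SESSION_STATUS_FORCED_DISABLED')
    if monitor == 'MONITOR_STATUS_FORCED_DOWN':
        return 'Forced Offline'
    if monitor == 'MONITOR_STATUS_DOWN':
        return 'Offline (marked down by a monitor)'
    if monitor == 'MONITOR_STATUS_UP':
        return 'Disabled' if forced_disabled else 'Available'
    return 'Unknown (%s / %s)' % (monitor, session)

monitor_rows = pl.get_member_monitor_status([pool1, pool2], members)
session_rows = pl.get_member_session_status([pool1, pool2], members)
for m_row, s_row in zip(monitor_rows, session_rows):
    for m, s in zip(m_row, s_row):
        print ui_label(m, s)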
I should say that I am just starting out with Python and iControl, so I am on a very steep learning curve and there could be a better way that I am not aware of.
- osnetworks_6668 (Nimbostratus)
I forgot to include the output from the above which is...
OUTPUT
[['MONITOR_STATUS_DOWN', 'MONITOR_STATUS_FORCED_DOWN'], ['MONITOR_STATUS_FORCED_DOWN', 'MONITOR_STATUS_FORCED_DOWN']]
[['SESSION_STATUS_ENABLED', 'SESSION_STATUS_FORCED_DISABLED'], ['SESSION_STATUS_FORCED_DISABLED', 'SESSION_STATUS_FORCED_DISABLED']]
Lots to review in these questions, and I'm not a big python guy so I'll do my best to describe how the APIs work and are parsed on the server side.
For this method:
MonitorStatus[][] get_member_monitor_status(
    in String[] pool_names,
    in Common::AddressPort[][] members
);
It would help to look at how we would have created the API for a single pool
MonitorStatus[] get_member_monitor_status(
    in String pool_name,
    in Common::AddressPort[] members
);
In that case, you pass in a single pool name and an array of pool members for that pool. The return is an array of MonitorStatus values, one for each pool member. That should make sense.
Now back to the multi-object API we have. The idea in the design was to allow you to do the same thing with "n" pools as you would with one in the single case. So for this, the parameters are a single array of pool names, and then a 2-d array of members (the first index in the array is just the ordinal for the pool, and the second is the array of items for each pool).
For your example of 2 pools with a single node:port, you still have to pass in the node:port individually for each pool. If you expand the arrays, it would look like this in pseudocode:
pool_names = array[2]
pool_names[0] = "pool_test_01"
pool_names[1] = "pool_test_02"

members = array[2][]
members[0] = array[1]
members[0][0] = new AddressPort()
members[0][0].address = 'node001'
members[0][0].port = 80
members[1] = array[1]
members[1][0] = new AddressPort()
members[1][0].address = 'node001'
members[1][0].port = 80

MonitorStatus[][] statuses = get_member_monitor_status(pool_names, members)
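In Python with bigsuds, the same expanded call would look roughly like this (a sketch; pl is assumed to be the LocalLB.Pool interface from an open bigsuds connection, as in the earlier posts):

pool_names = ['pool_test_01', 'pool_test_02']
member = {'address': 'node001', 'port': 80}

# One inner list per pool, even though both pools contain the same member.
members = [[member], [member]]

statuses = pl.get_member_monitor_status(pool_names, members)
# Expected shape mirrors the members argument, one inner list per pool:
# [['MONITOR_STATUS_DOWN'], ['MONITOR_STATUS_FORCED_DOWN']]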
On the server side, we first look at the "pool_names" parameter and loop over that array. Then for each entry, we take the associated second dimension of the members array to get the list of members for the current pool.
In the first example above, you made the call without the second entry for the second pool. The server code, when iterating over pool 2, found 0 members, so it ignored it.
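A rough Python model of that server-side pairing (illustrative only; the actual server code is not Python, and lookup_status is a hypothetical helper that returns the status of one member in one pool):

def get_member_monitor_status_model(pool_names, members, lookup_status):
    results = []
    for i, pool in enumerate(pool_names):
        # A pool with no matching inner list gets an empty member list,
        # so it contributes no statuses to the result.
        pool_members = members[i] if i < len(members) else []
        results.append([lookup_status(pool, m['address'], m['port'])
                        for m in pool_members])
    return results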
I could give you some guidance on coding this in Perl or .NET but my python chops aren't there. Hopefully this gives you what you need to get the client parameters correct.
-Joe
Thanks to both of you for your responses 🙂
@Joe, that was my first understanding of how the association of
(String[] pool_names, Common::AddressPort[][] members)
is performed in this function.
I was surprised by the result, and finally I owe you apologies:
I can't reproduce the strange behavior I had (4 results for 2 pools and 1 node). So it's a pretty good thing for me ... but now I have to find out why I had this behavior yesterday.
By the way, thanks a lot to all of you 🙂
Best Regards,
G2
- Guillaume, thanks goes to you for the very detailed information in your question. A lot of the time I have to guess at what's going on; having that much detail really helped. Hope this gets you going...