bigip_pool
6 Topics

F5 force connection reset on pool member
I have two MySQL servers behind an F5 and I am using priority group activation for active-passive, so requests always go to the primary member until it goes down. Question: the primary failed and all my connections moved to the standby box, which is what I want. Now the primary is back, but my clients are still connected to the standby DB because of their persistent connections. How do I force the pool to drop all existing connections so traffic goes back to the primary DB? The problem is that if some clients keep writing data to the standby it will create consistency issues. Marking the member Forced Offline doesn't drop active connections.
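One way to handle the failback on the BIG-IP side is to clear the server-side connections that are still pinned to the standby member once the primary is healthy again, so clients reconnect and priority group activation sends them back to the higher-priority member. A tmsh sketch; the address 10.0.0.2 and port 3306 are placeholders for the standby MySQL member:

    # list connections still going to the standby member (placeholder address/port)
    tmsh show sys connection ss-server-addr 10.0.0.2 ss-server-port 3306

    # drop those connections; reconnecting clients will land on the primary
    tmsh delete sys connection ss-server-addr 10.0.0.2 ss-server-port 3306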
ansible - bigip_pool_member: cannot add pool member from "Common" to pool "Test"

Hi, I'm running into an issue: when I create pool members in /Common and create a pool in /Test, I'm unable to add the pool members to the pool using the Ansible bigip_pool_member module.

playbook:

#
# Filename    : pooltest.yaml
# Date        : 20 Jul 2020
# Author      : Balaji Venkataraman (xbalaji)
# Description : playbook to configure bigip pool with members in different partition
- name: create bigip_pool and add members to it
  hosts: all
  connection: local
  gather_facts: False
  vars:
    remove_resources: false
    lx_pool_name: "xbltmpool"
    lx_provider:
      server: "{{inventory_hostname}}"
      user: "{{f5_username}}"
      password: "{{f5_password}}"
      validate_certs: False
      timeout: 30
    pool_members:
      - "{{lx_pool_name}}-member-01.company.com"
      - "{{lx_pool_name}}-member-02.company.com"
  tasks:
    - name: set create or delete flag
      delegate_to: localhost
      set_fact:
        lx_state: "{% if remove_resources|lower == 'true' %}absent{% else %}present{% endif %}"
        lx_action: "{% if remove_resources|lower == 'true' %}delete{% else %}create{% endif %}"

    - name: "{{lx_action}} pool members"
      delegate_to: localhost
      bigip_node:
        name: "{{item}}"
        fqdn: "{{item}}"
        state: "{{lx_state}}"
        provider: "{{lx_provider}}"
        partition: "/Common"
        description: "ansible created LTM node - xbalaji"
      loop: "{{pool_members|flatten(1)}}"

    - name: "{{lx_action}} the pool"
      delegate_to: localhost
      bigip_pool:
        state: "{{lx_state}}"
        name: "{{lx_pool_name}}"
        partition: "/Test"
        lb_method: "round-robin"
        monitors:
          - "/Common/http"
        provider: "{{lx_provider}}"

    - name: "{{lx_action}} pool members to {{lx_pool_name}}"
      delegate_to: localhost
      bigip_pool_member:
        state: "{{lx_state}}"
        pool: "{{lx_pool_name}}"
        partition: "/Test"
        name: "{{item}}"
        fqdn: "{{item}}"
        port: "80"
        fqdn_auto_populate: "no"
        preserve_node: "yes"
        reuse_nodes: "yes"
        provider: "{{lx_provider}}"
      loop: "{{pool_members|flatten(1)}}"

output:

PLAY [create bigip_pool and add members to it] ************************************

TASK [set create or delete flag] **************************************************
ok: [f5adcfxioc01-apidev.net.pge.com]

TASK [create pool members] ********************************************************
changed: [f5adcfxioc01-apidev.net.pge.com] => (item=xbltmpool-member-01.company.com)
changed: [f5adcfxioc01-apidev.net.pge.com] => (item=xbltmpool-member-02.company.com)

TASK [create the pool] ************************************************************
changed: [f5adcfxioc01-apidev.net.pge.com]

TASK [create pool members to xbltmpool] *******************************************
failed: [f5adcfxioc01-apidev.net.pge.com] (item=xbltmpool-member-01.company.com) => {"ansible_loop_var": "item", "changed": false, "item": "xbltmpool-member-01.company.com", "msg": "01070734:3: Configuration error: FQDN Node (/Test/xbltmpool-member-01.company.com:xbltmpool-member-01.company.com) already exists as Node (/Common/xbltmpool-member-01.company.com)"}
failed: [f5adcfxioc01-apidev.net.pge.com] (item=xbltmpool-member-02.company.com) => {"ansible_loop_var": "item", "changed": false, "item": "xbltmpool-member-02.company.com", "msg": "01070734:3: Configuration error: FQDN Node (/Test/xbltmpool-member-02.company.com:xbltmpool-member-02.company.com) already exists as Node (/Common/xbltmpool-member-02.company.com)"}

PLAY RECAP ************************************************************************
f5adcfxioc01-apidev.net.pge.com : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

Seeing the above error, can someone let me know if there is anything that needs to be changed, or is this a known problem?

Thanks and Regards,
Balaji
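The error says the bigip_pool_member task is trying to create an FQDN node under /Test while a node with the same name already exists under /Common. One possible workaround, sketched below on the assumption that the nodes do not have to live in /Common, is to create the FQDN nodes in the same partition as the pool, so the member task finds them there instead of colliding with the /Common copies:

    - name: "{{lx_action}} pool members"
      delegate_to: localhost
      bigip_node:
        name: "{{item}}"
        fqdn: "{{item}}"
        state: "{{lx_state}}"
        provider: "{{lx_provider}}"
        partition: "/Test"          # same partition as the pool (assumption: nodes may live here)
        description: "ansible created LTM node - xbalaji"
      loop: "{{pool_members|flatten(1)}}"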
Assign an Existing Node to Pool in F5 BIG IP through F5-SDK

Is there any way to assign a node that already exists in the Common partition to a pool that also already exists? For example:

Load the pool and the node:

    pool = bigip.tm.ltm.pools.pool.load(name="mypool", partition='Common')
    node = bigip.tm.ltm.nodes.node.load(name="mynode", partition='Common')

First option:

    pool.members_s.members.create(name=node.name, partition="Common")
    pool.update()

Second option:

    pool.members_s.members[0] = node

I don't know if the code is exactly correct; thanks in advance.
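For what it's worth, in the f5-sdk a pool member is its own collection entry named "node:port", so the usual way to attach an existing /Common node is to create a member whose name references that node (the second option, assigning into members_s.members[0], is not something the SDK collections support). A minimal sketch, assuming port 80, hypothetical connection details, and the object names from the question:

    from f5.bigip import ManagementRoot

    # hypothetical connection details
    bigip = ManagementRoot('bigip.example.com', 'admin', 'secret')

    pool = bigip.tm.ltm.pools.pool.load(name='mypool', partition='Common')

    # member name is "<node>:<port>"; using the existing node's name in the same
    # partition reuses that node instead of creating a new one
    member = pool.members_s.members.create(name='mynode:80', partition='Common')

Creating the member through the members_s subcollection applies it on the device directly, so a follow-up pool.update() should not be needed.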
Monitoring Windows server (WMI, services, etc.)

I have a Windows server with a home-grown process I would like to monitor. The server sometimes has issues that cause the CPU to go to 100% and many services stop responding. The server still responds to pings, though, so it doesn't get taken out of the pool. I would like to try the WMI monitor to check the health of the server and have it taken out of the pool if it doesn't respond to the check. I know monitoring Windows processes is hard and not built in. The service does use HTTP, but the dev guys won't get back to me on what to query. I read this article and was thinking about setting it up: https://support.f5.com/csp/article/K6914 Unfortunately I can't find the .dll file anywhere. Where is this file?! Any other recommendations or suggestions?
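Since the application already speaks HTTP, one interim approach (while chasing the WMI monitor's ISAPI .dll) is an HTTP monitor whose send string requests an application URI and whose receive string expects a healthy reply, so a hung service fails the check even though ping still answers. A tmsh sketch; the monitor name, URI, Host header, and pool name are all placeholders:

    tmsh create ltm monitor http win_app_http send "GET /healthcheck HTTP/1.1\r\nHost: myapp\r\nConnection: close\r\n\r\n" recv "200 OK"
    tmsh modify ltm pool my_windows_pool monitor win_app_http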
Is this achievable? Mark an LTM VIP down if one of the pools is down.

Hi All, I have an LTM VIP which has two pools, V1 and V2, and an iRule that directs URLs to the respective pool based on context path. Currently, even if one pool goes down (its member nodes are down), the LTM virtual server stays up and serves calls to the other pool. Is it possible to mark the LTM VIP as down even if only one pool is down? GTM will not mark the LTM virtual server down unless all pools under it are down, and I want the LTM marked down even if one pool under it goes down. We have the below iRule now:

    when HTTP_REQUEST {
        switch -glob [string tolower [HTTP::uri]] {
            "/test/v1/*" {
                pool BK7-TEST-8080
            }
            "/example/v2/*" {
                pool BK7-EXAMPLE-8280
            }
        }
    }
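An iRule by itself cannot change the virtual server's availability as seen by GTM (that still needs a monitor that probes a path behind each pool), but as an illustration of the LTM side, the existing iRule could check active_members for both pools and reset connections when either pool has no members left, so the VIP effectively stops serving. A sketch reusing the pool names above:

    when HTTP_REQUEST {
        # refuse traffic when either pool is empty, so the VIP stops answering
        # even though its own status is still "available"
        if { [active_members BK7-TEST-8080] == 0 || [active_members BK7-EXAMPLE-8280] == 0 } {
            reject
            return
        }
        switch -glob [string tolower [HTTP::uri]] {
            "/test/v1/*" { pool BK7-TEST-8080 }
            "/example/v2/*" { pool BK7-EXAMPLE-8280 }
        }
    }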
Pool is changed when Ansible playbook is played again

When I add a pool on my BIG-IP with the Ansible bigip_pool module, the pool is correctly defined. Fine! But when I play the task again, the pool is reported as changed in the Ansible PLAY RECAP. It shouldn't be, because nothing was changed. This only occurs when I define the monitor option for the pool in the task:

    monitor_type: and_list
    monitors:
      - http

When I don't define a monitor option, I can play the task again and the pool is seen as unchanged, as expected. Is this normal behavior? What can I do to avoid it? I need to see only real changes in the PLAY RECAP because some handlers are triggered by changes. Thank you for your help 🙂
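A frequent cause of a pool task reporting changed on every run is the monitor being given without its partition, so the value in the task never matches the /Common/http path that BIG-IP stores. A sketch worth trying, with the pool name and provider variable as placeholders; this is an assumption about the cause, not a confirmed fix:

    - name: create pool with explicit monitor path
      bigip_pool:
        name: "my_pool"                # placeholder pool name
        state: present
        lb_method: round-robin
        monitor_type: and_list
        monitors:
          - /Common/http               # full path, matching what BIG-IP stores
        provider: "{{ provider }}"     # placeholder provider dict
      delegate_to: localhost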