Vertica DB Monitoring Failure after upgrade from 10.x to 23.x
I need assistance with a Vertica monitor: the version upgrade has left the F5 unable to use a PostgreSQL monitor against the database. Why does the monitor no longer work against 23.3?

Pool monitor type: PostgreSQL
Local login: password verified and working for version 10.1

---------------------------------------------------------------------------------------------

Current Vertica version: 23.3 / previous version: 10.1
*TMOS 15.1 and 17.1 show the same pool member status result/symptoms

Monitors tested: PostgreSQL, Oracle, MySQL (to think outside the box)
Currently configured for the pool: PostgreSQL

-Send strings tested for all monitor types-

Send string: SELECT 1
Send string: SELECT 1 FROM DUAL;
Receive: 1
*10.1 = Success / 23.3 = Fail

-----------------------------

Send string: SELECT 1
Send string: SELECT 1 FROM DUAL;
Receive: Nothing
*10.1 = Success / 23.3 = Fail

Both SELECT 1 and SELECT 1 FROM DUAL receive the below successfully.

_________________________________________________________________________________________________

Troubleshooting via F5 CLI:

    foo.bar.f5.com:Active# DB_monitor cmd status debug

Telnet to port 5433 = successful.
The version of the JDBC package does not matter between TMOS versions; the monitor still fails.

RPM versions (TMOS 15.1):

    postgresql-jdbc-42.2.13-0.0.32.noarch
    postgresql-9.3.2-0.0.32.i686
    postgresql-libs-9.3.2-0.0.32.i686
    postgresql-share-9.3.2-0.0.32.i686

RPM versions (TMOS 17.1):

    postgresql-server-15.0-1.fc38.0.0.2.x86_64
    postgresql-jdbc-42.2.13-0.0.2.noarch
    postgresql-private-libs-15.0-1.fc38.0.0.2.i686
    postgresql-15.0-1.fc38.0.0.2.x86_64
    postgresql-private-libs-15.0-1.fc38.0.0.2.x86_64
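Next debugging step, as a hedged sketch (the monitor name db_vertica_mon is a placeholder, not from our config): as I understand it, the SQL database monitors (postgresql, mysql, oracle) are executed by the Java DBDaemon process rather than bigd, so its log should show the JDBC-level error. That would tell us whether Vertica 23.3 is rejecting the connection handshake or authentication rather than the send string itself.

    # Turn on debug logging for the monitor (placeholder name)
    tmsh modify ltm monitor postgresql db_vertica_mon debug yes

    # Watch the DBDaemon log while the monitor probes the member
    tail -f /var/log/DBDaemon-0.log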
Any way to do DNS loadbalancing without BIG-IP DNS module?

Hi,

In our environment we have a number of domain controllers which act as DNS servers for everything internally. Now, we have one specific type of client that can only be configured with a single IP address for its DNS server, and this causes problems when a DNS server is down for maintenance. We run BIG-IP VE v16.1.4 with LTM, but not DNS, provisioned. I'd like to solve this without provisioning the BIG-IP DNS module in this particular instance, by doing the following:

1. Create a new stateless VS to receive DNS queries on port 53/UDP.
2. Assign a UDP protocol profile with "datagram" enabled (so it load balances every single packet) to the VS.
3. Create a pool of DNS servers.
4. Create an internal DNS record that will be used to check that a DNS server responds with the correct RR.
5. Assign a "DNS" monitor to the pool and configure it to check service status by sending a DNS query for the RR I created and seeing if the response is correct.

However, the "DNS" monitor puts every server in the DOWN state. Using tcpdump on the BIG-IP VE I can see that the BIG-IP does not send any DNS query packets from this monitor to the DNS servers in the pool. I do see a lot of other DNS queries from the BIG-IP (the servers in question are also the DNS servers for the BIG-IP itself).

So: should it even be possible to create a normal LTM pool containing DNS servers and have the BIG-IP monitor the service state of each member using the "DNS" monitor?
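For reference, a minimal tmsh sketch of what I mean; the monitor and pool names, the health-check record, and the expected answer address are all made up for illustration:

    # DNS monitor: query an A record and require the expected address in the answer
    create ltm monitor dns dns_probe_mon defaults-from dns qname healthcheck.example.internal qtype a recv 10.10.10.10 accept-rcode no-error answer-contains query-type

    # Pool of DNS servers with the monitor attached
    create ltm pool dns_pool monitor dns_probe_mon members add { 10.0.0.11:53 10.0.0.12:53 }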
Proper syntax for using quotes in monitors send/recv?

For http monitors, we generally look at our application's status page. This returns the output from various tests, with both the test name and result surrounded by quotes. It's my understanding that quotes need to be prefaced with a backslash in order to be processed properly. I didn't have any problems with this until I tried "load sys config" from tmsh and realized it doesn't like the syntax:

    (Active)(/Common)(tmos) create ltm monitor http MyMon send 'GET /MyApp/Status\r\n' recv '\"httpStatus\":\"OK\"'
    (Active)(/Common)(tmos) load sys config
    Loading configuration...
      /config/bigip_base.conf
      /config/bigip_user.conf
      /config/bigip.conf
    01070642:3: Monitor /Common/MyMon parameter contains unescaped " escape with backslash.
    Unexpected Error: Loading configuration process failed.
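One hedged variant to try (not verified on every TMOS release): enclose the strings in double quotes instead of single quotes, so tmsh consumes the backslash escapes itself before the value is written to bigip.conf. Note that the HTTP/1.1 request line and Host header below are additions for illustration, not part of the original monitor:

    create ltm monitor http MyMon send "GET /MyApp/Status HTTP/1.1\r\nHost: myapp.example.com\r\nConnection: Close\r\n\r\n" recv "\"httpStatus\":\"OK\""

After creating it, "list ltm monitor http MyMon" should show the recv value with the embedded quotes escaped; if so, "load sys config" has a fair chance of parsing it cleanly.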
LDAP monitor behaviour

Hi

Just wanted to check my understanding of how an LDAP monitor behaves. Forgive the long background 😉

We had an incident where users couldn't authenticate because an AD Query in our access policy was failing:

    AD agent: Query: query with '(|(sAMAccountName=bloggsjoe))' failed

Our current monitor still had the domain controller as up, so all users attempting to authenticate from that point failed. We forced the domain controller offline so traffic would go to the next in the pool (priority group), and users were able to authenticate.

I am looking to configure an LDAP monitor to attach to the pool of controllers used to authenticate users. It is configured to do an LDAP search looking for a particular account. I have mandatory attributes set to true, so if the search fails it should mark the member down.

    ltm monitor ldap /Common/ldap_dc_monitor {
        base "OU=Service Accounts,DC=prod,DC=local"
        chase-referrals yes
        debug no
        defaults-from /Common/ldap
        description "LDAP monitor for domain controllers used for auth"
        destination *:389
        filter sAMAccountName=f5_apm
        interval 10
        mandatory-attributes yes
        password ***********
        security tls
        time-until-up 0
        timeout 31
        username f5_apm@prod.local
    }

I'm hoping this monitor will mimic the AD query, so if we have an occurrence where the primary domain controller has an issue with the search, it will be marked down and the next in the priority group will take over. If I change the filter to something I know will fail, I can see the pool members get marked down. However, what I wasn't expecting was that it takes the full timeout before they get marked down. I turned on debug and tailed the monitor's log file for the primary controller. I could see the response from the controller come back straight away, but it still waits the full timeout before bringing the member down:

    no attributes were received for filter 'SAMAccountName=blah'

Is that expected behaviour? I was expecting the member to be marked down as soon as the above response was received.

Cheers,
Simon
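If it is expected, then one hedged option for shortening the reaction time (assuming the domain controllers tolerate more frequent probes) is to shrink the interval/timeout window, since as far as I know bigd only marks a member down once the timeout expires without a successful check:

    # Roughly follows the common 3 * interval + 1 convention
    modify ltm monitor ldap /Common/ldap_dc_monitor interval 5 timeout 16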
LTM monitor - help

I have a pool with 2 servers, 10.0.0.1 and 10.0.0.2, that runs multiple websites. I'm looking to have multiple monitors attached to the pool, one for each URL, using the send and receive strings below. Question: is the below correct, and what is the traffic flow? In other words, does it check each server by sending the request with that Host header, or will it resolve example1.test.co.uk to its external address and test against that?

    GET / HTTP/1.1\r\nHost: example1.test.co.uk\r\nConnection: Close\r\n\r\n
    HTTP/1.1 200 OK

Any help would be good.
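For what it's worth, a hedged sketch of the two-monitor setup (the monitor and pool names and the second hostname are made up). The monitor always sends its probe to each pool member's own IP address; the Host header just selects which site on that member answers, so no external DNS resolution is involved:

    create ltm monitor http mon_site1 send "GET / HTTP/1.1\r\nHost: example1.test.co.uk\r\nConnection: Close\r\n\r\n" recv "HTTP/1.1 200 OK"
    create ltm monitor http mon_site2 send "GET / HTTP/1.1\r\nHost: example2.test.co.uk\r\nConnection: Close\r\n\r\n" recv "HTTP/1.1 200 OK"

    # Require both monitors to pass before a member is marked up
    modify ltm pool web_pool monitor "mon_site1 and mon_site2"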
LTM Health Monitors

Hi team. I want to ask a question about health monitors.

I have a web site (www.example.com) behind the load balancer. I created a health monitor with this send string:

    GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: Close\r\n\r\n

and this receive string:

    HTTP 1.1 200 OK

The application is available and the VS is online (green circle). Then I changed the send string to:

    GET / HTTP/1.1\r\nHost: www.f5lab.com\r\nConnection: Close\r\n\r\n

So I replaced the host with an unrelated name, and the application is available again :) The VS is online (green circle).

How should we interpret this? Do you know a good article or video about send and receive strings?

Thank you.
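One hedged way to see what's going on (the member IP here is hypothetical): send the same request to a pool member directly and inspect the reply. Many web servers answer 200 from a default virtual host for any unrecognized Host header, in which case a receive string matching on "200 OK" would still succeed:

    # Ask the pool member directly, presenting the unrelated hostname
    curl -v http://10.0.0.1/ -H "Host: www.f5lab.com"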
Auto-Enable after Receive Disable String

I've been looking at the Receive Disable String on some HTTP monitors, and we can confirm that when the monitor detects the string, the pool members do in fact get disabled. But when the receive string is returned again instead, the member never goes back to enabled. Is this normal? Or is there a way to automatically re-enable the member once it's functioning again?

Thanks in advance.
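For reference, a minimal sketch of a monitor using both strings (the path, hostname, and strings are hypothetical). My understanding of the intent is that a member whose response matches recv-disable is marked up but disabled for new sessions, and should return to enabled on its own once a later check matches the normal receive string; if that isn't happening, it may be worth checking whether something else, such as a manual disable, is also in play:

    create ltm monitor http maint_aware_mon send "GET /health HTTP/1.1\r\nHost: app.example.com\r\nConnection: Close\r\n\r\n" recv "STATUS OK" recv-disable "STATUS MAINT"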