Forum Discussion
GTM - Virtual server monitor down
Hi!
Noob warning here. I've set up a GTM from scratch:
- Added default gateway 10.0.1.1
- Created a data center
- Added an F5 cluster using the default bigip monitor
- Added both nodes using bigip_add (the servers were all green after this step)
- Ran big3d_install. Looked fine
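For reference, the steps above correspond roughly to the following commands on the GTM (a sketch only; the data-center and server object names are hypothetical, and the exact tmsh syntax may vary by TMOS version):

```shell
# Default route toward the lab gateway
tmsh create net route default gw 10.0.1.1

# Data center and the LTM cluster as a GTM server object,
# using the default bigip monitor (object names are hypothetical)
tmsh create gtm datacenter lab_dc
tmsh create gtm server lab_ltm_cluster \
    datacenter lab_dc \
    product bigip \
    monitor bigip \
    devices add { ltm1 { addresses add { 10.0.0.11 } } \
                  ltm2 { addresses add { 10.0.0.12 } } }

# Exchange certificates and push big3d from the GTM shell
bigip_add 10.0.0.11
bigip_add 10.0.0.12
big3d_install 10.0.0.11 10.0.0.12
```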
The F5 LB cluster is in my lab environment:
- 10.0.0.11
- 10.0.0.12
My GTM is here:
- 10.0.1.11
To rule out port lockdown issues, I have set the self IPs to allow all services.
Since I use SNAT translation on some of my VIPs, virtual server discovery does not work (according to some posts I've read), so I have added a VIP manually using the exact same name and IP as the actual VIP on the LTM, leaving the translation settings blank. However, it's marked as down.
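For reference, a manually added GTM virtual server (defined under the server object, translation left blank) might look like this in tmsh; the server object name, VS name, and port are assumptions:

```shell
# Add the VS to an existing GTM server object (names are hypothetical).
# GTM matches the LTM VS by IP:port, so the destination must match
# the LTM virtual server's listener exactly.
tmsh modify gtm server lab_ltm_cluster virtual-servers add {
    vs_site1 { destination 10.0.0.22:443 }
}
```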
I have verified that the firewall is open between the two:
curl --interface EXTERNAL -k https://10.0.0.22
This is site1
It is served on port 80 with site1 as the host header.
tcpdump does not show any monitoring attempts between the GTM and the LTMs. I have tested bigip, https, and tcp as monitors. All of them show the virtual server as red.
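As it turns out later in the thread, GTM-to-LTM health for virtual servers travels over iQuery (TCP 4353) rather than direct monitor probes, so that is the port worth watching. A sketch of the checks (the interface name is an assumption; adjust to the relevant VLAN):

```shell
# Watch for iQuery traffic (TCP 4353) instead of classic monitor probes;
# "external" is a placeholder interface name
tcpdump -nni external port 4353 and host 10.0.0.11

# Verify the iQuery channel itself from the GTM
iqdump 10.0.0.11
```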
Please let me know what noob mistake I've made? 🙂
/Patrik
4 Replies
- Maneesh_72711
Cirrostratus
Is iQuery running fine between the two?
Thank you for your suggestion!
Since the bigip monitor works fine for the servers, it should be, right? I tested with iqdump just now and it looked good as well.
/Patrik
- FMA
Nimbostratus
Hey Patrik,
Concerning virtual server discovery: if, for example, you have virtual servers assigned private IP addresses and you want GTM to return public IP addresses instead of the private ones when a DNS request comes in, you'll have to disable VS discovery, because discovery won't let you use the "translation" feature.
You won't see any monitor probes from GTM to your VS, since it resides on an LTM and all the metrics and health status are exchanged via the iQuery protocol (TCP 4353).
Just a quick question: did you specify the port for the VS you added? Can you show the output of "show gtm server " from your GTM?
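A quick way to confirm the iQuery channel state from the GTM command line (a sketch; exact output format varies by version):

```shell
# Show the state of iQuery connections to each configured server
tmsh show gtm iquery
```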
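The requested output could be gathered like this, where lab_ltm_cluster is a placeholder for whatever the GTM server object is actually called:

```shell
tmsh show gtm server lab_ltm_cluster                  # status, incl. per-VS availability
tmsh list gtm server lab_ltm_cluster virtual-servers  # check each destination IP:port
```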
- Stanislas_Piro2
Cumulonimbus
Hi Patrik,
To summarize:
- gtm private ip : 10.0.1.11
- ltm 1 ip : 10.0.0.11
- ltm 2 ip : 10.0.0.12
- ltm vs : 10.0.0.22
- ltm vs public ip (nat by firewall) : 1.2.3.4
If you want GTM to answer with the private IP, the GTM VS must be created with the same IP as the LTM one.
If you want GTM to answer with the public IP, you must create the GTM VS with the public IP (1.2.3.4) and define the translation IP and port with the LTM values (used to request the LTM VS status).
LTM and GTM do not match VS status by name but by IP/port.
I think F5 must change this behavior, because lots of deployments use NAT and VS discovery can't be used in such configurations! Maybe they are waiting for IPv6 to solve this issue :-)
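The public-IP-plus-translation variant described above could be expressed in tmsh roughly as follows (IPs taken from the summary; object names and the port are assumptions):

```shell
# GTM answers DNS queries with 1.2.3.4, but checks status of
# the real LTM VS at 10.0.0.22 via the translation settings
tmsh modify gtm server lab_ltm_cluster virtual-servers add {
    vs_site1_public {
        destination 1.2.3.4:443
        translation-address 10.0.0.22
        translation-port 443
    }
}
```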
EDIT (included in the comments below):
In GTM server objects, you must have an object per GTM and an object per LTM.
Same configuration as the VS: if a GTM is behind a NAT device and needs to communicate with another GTM device via the internet, you must configure translation.
If you create link objects assigned to the LTM object in the GTM configuration, they must be defined with the IP address of the appliance's next hop. If a link object is down, all related objects are down even if the LTM VS status is up.
If GTM/LTM communication uses the only internet link of a data center, link configuration is not necessary, since if that link is down, GTM will not be able to get status anyway... :-)
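A link object as described above is defined against the next-hop (router) address of the data center's uplink; a sketch, with hypothetical names and the lab gateway as the router address:

```shell
# The link belongs to a data center and is identified by its uplink
# router address; GTM uses it to track uplink availability
tmsh create gtm link lab_dc_uplink \
    datacenter lab_dc \
    router-addresses add { 10.0.1.1 }
```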