big-ip gtm
Adding/Removing Metadata to GTM Pool: Will This Impact Traffic?
I haven't seen any evidence of this in practice or in the documentation, but I'd feel a lot better with some corroboration from you lovely people on DevCentral. I'd like to execute the following PUT request against a GTM server running iControl REST v12 (for each pool):

```
curl -sk "https://[f5_server]/mgmt/tm/gtm/pool/a/[pool_name]" \
  -H 'Content-Type: application/json' \
  --user "[user]":"[password]" \
  -X PUT \
  -d '{"metadata": [
        {"name": "AppName",     "persist": "true",  "value": "Test_Value1"},
        {"name": "ServiceName", "persist": "false", "value": "Test_Value2"},
        {"name": "AppOwner",    "persist": "false", "value": "Test_Value3"},
        {"name": "AppSupport",  "persist": "false", "value": "Test_Value4"}
      ]}'
```

So far, so simple, but I'd like to ensure that this doesn't trigger something dramatic like a config reload, or crash the server. No visible cause for panic here, right? N.b. there is no metadata on these pools at present; I'm aware that this is a "write" and not an "update" operation, but that's okay. Thanks!
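For what it's worth, one hedged caveat to check in a lab first: in iControl REST, PATCH is documented to modify only the properties named in the request body, whereas PUT sends a full representation of the resource. Since the goal is to add metadata without disturbing anything else on the pool, a metadata-only PATCH may be the safer variant. A minimal sketch of building such a body (the entry names and values are just the placeholders from the post):

```python
import json

# Placeholder metadata entries taken from the question above.
NEW_METADATA = [
    {"name": "AppName", "persist": "true", "value": "Test_Value1"},
    {"name": "ServiceName", "persist": "false", "value": "Test_Value2"},
]

def metadata_patch_body(entries):
    """Build a request body naming only the metadata property, so a
    PATCH leaves every other pool property untouched."""
    return json.dumps({"metadata": entries})

body = metadata_patch_body(NEW_METADATA)
# Sent with e.g.:
#   curl -sk "https://[f5_server]/mgmt/tm/gtm/pool/a/[pool_name]" \
#     -H 'Content-Type: application/json' --user "[user]":"[password]" \
#     -X PATCH -d "$body"
```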
Using iControl REST version 11.6, how can I get a GTM pool member's IP address?

As I look through the documentation, it seems that 'address' is one of the fields typically returned from the API. When I request the pool member, there is no address key at all in the returned JSON. Example of the JSON:

```
{
  "enabled": true,
  "fullPath": "/Common/BigIP:www-v6-site",
  "generation": 2735,
  "kind": "tm:gtm:pool:members:membersstate",
  "limitMaxBps": 0,
  "limitMaxBpsStatus": "disabled",
  "limitMaxConnections": 0,
  "limitMaxConnectionsStatus": "disabled",
  "limitMaxPps": 0,
  "limitMaxPpsStatus": "disabled",
  "monitor": "default",
  "name": "BigIP:www-v6-site",
  "order": 2,
  "partition": "Common",
  "ratio": 1,
  "selfLink": "https://localhost/mgmt/tm/gtm/pool/~Common~www-site-pool/members/~Common~BigIP:www-v6-site?ver=11.6.0"
}
```

I would really like to be able to use the information in the address key, but I do not know how to have it included in the returned data. It certainly seems to be included by default in the ltm methods but not in the gtm, as you can see above. Thanks for your time.
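One workaround sketch (an assumption to verify against your version, not a documented member field): a GTM A-pool member name encodes "<gtm-server>:<virtual-server>", and the address lives on the GTM server's virtual-server object, whose "destination" holds ip:port and can be fetched from /mgmt/tm/gtm/server/~Common~<server>/virtual-servers. The client-side lookup might look like this:

```python
def split_member_name(full_path):
    """'/Common/BigIP:www-v6-site' -> ('BigIP', 'www-v6-site')"""
    name = full_path.rsplit("/", 1)[-1]
    server, _, vs = name.partition(":")
    return server, vs

def strip_port(destination):
    """Destinations use ':' before the port for IPv4, and '.' for
    IPv6 addresses (which already contain colons)."""
    sep = "." if destination.count(":") > 1 else ":"
    return destination.rsplit(sep, 1)[0]

def address_for_member(full_path, virtual_servers):
    """virtual_servers: the 'items' list returned by the GTM server's
    virtual-servers sub-collection for the matching server."""
    _, vs_name = split_member_name(full_path)
    for vs in virtual_servers:
        if vs["name"] == vs_name:
            return strip_port(vs["destination"])
    return None
```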
Why am I getting DNS queries in my logs?

Why am I getting all of these log messages? I have GTM logging set to notice and big3d to error. I have the WIPs' "logging" options deselected. But my logs are getting crushed with what looks like every DNS request:

```
Jan 23 12:13:59 localhost info tmm[21782]: 2017-01-23 12:13:59 myf5 qid 5787 from x.x.x.53#40281: view none: query: autodiscover.mydomain.com IN AAAA +EDC (x.x.x.253%0)
Jan 23 12:13:59 localhost info tmm[21782]: 2017-01-23 12:13:59 myf5 qid 5787 to x.x.x.53#40281: [NOERROR qr,aa,rd,cd,do] response: empty
Jan 23 12:13:59 localhost info tmm[21782]: 2017-01-23 12:13:59 myf5 qid 56460 from x.x.x.53#45934: view none: query: outlook.mydomain.com IN AAAA + (x.x.x.253%0)
Jan 23 12:14:00 localhost info tmm[21782]: 2017-01-23 12:14:00 myf5 qid 65490 from x.x.x.53#10938: view none: query: autodiscover.mydomain.com IN AAAA +EDC (x.x.x.253%0)
Jan 23 12:14:00 localhost info tmm[21782]: 2017-01-23 12:14:00 myf5 qid 65490 to x.x.x.53#10938: [NOERROR qr,aa,rd,cd,do] response: empty
Jan 23 12:14:01 localhost info tmm[21782]: 2017-01-23 12:14:01 myf5 qid 24548 from x.x.x.53#62785: view none: query: outlook.mydomain.com IN AAAA +ED (x.x.x.253%0)
Jan 23 12:14:02 localhost info tmm[21782]: 2017-01-23 12:14:02 myf5 qid 49823 from x.x.x.53#32067: view none: query: autodiscover.mydomain.com IN A +ED (x.x.x.253%0)
Jan 23 12:14:02 localhost info tmm[21782]: 2017-01-23 12:14:02 myf5 qid 49823 to x.x.x.53#32067: [NOERROR qr,aa,rd,do] response: autodiscover.mydomain.com. 30 IN A x.x.x.75;
Jan 23 12:14:02 localhost info tmm[21782]: 2017-01-23 12:14:01 myf5 qid 58020 from x.x.x.53#16488: view none: query: outlook.mydomain.com IN AAAA +EDC (x.x.x.253%0)
Jan 23 12:14:03 localhost info tmm[21782]: 2017-01-23 12:14:02 myf5 qid 2801 from x.x.x.53#15405: view none: query: autodiscover.mydomain.com IN AAAA +ED (x.x.x.253%0)
Jan 23 12:14:03 localhost info tmm[21782]: 2017-01-23 12:14:02 myf5 qid 2801 to x.x.x.53#15405: [NOERROR qr,aa,rd,do] response: empty
Jan 23 12:14:03 localhost info tmm[21782]: 2017-01-23 12:14:03 myf5 qid 13749 from x.x.x.53#3680: view none: query: outlook.mydomain.com IN AAAA +ED (x.x.x.253%0)
Jan 23 12:14:04 localhost info tmm[21782]: 2017-01-23 12:14:03 myf5 qid 6042 from x.x.x.53#42914: view none: query: usonly.fuse.mydomain.com IN AAAA +ED (x.x.x.253%0)
Jan 23 12:14:04 localhost info tmm[21782]: 2017-01-23 12:14:03 myf5 qid 6042 to x.x.x.53#42914: [NOERROR qr,aa,rd,do] response: empty
Jan 23 12:14:04 localhost info tmm[21782]: 2017-01-23 12:14:03 myf5 qid 49728 from x.x.x.53#33852: view none: query: usonly.fuse.mydomain.com IN A +ED (x.x.x.253%0)
Jan 23 12:14:04 localhost info tmm[21782]: 2017-01-23 12:14:03 myf5 qid 49728 to x.x.x.53#33852: [NOERROR qr,aa,rd,do] response: usonly.fuse.mydomain.com. 30 IN A 131.131.249.80;
Jan 23 12:14:04 localhost info tmm[21782]: 2017-01-23 12:14:03 myf5 qid 8997 from x.x.x.53#63498: view none: query: outlook.mydomain.com IN AAAA + (x.x.x.253%0)
Jan 23 12:14:05 localhost info tmm[21782]: 2017-01-23 12:14:04 myf5 qid 53216 from x.x.x.53#36281: view none: query: autodiscover.mydomain.com IN AAAA +ED (x.x.x.253%0)
Jan 23 12:14:05 localhost info tmm[21782]: 2017-01-23 12:14:04 myf5 qid 53216 to x.x.x.53#36281: [NOERROR qr,aa,rd,do] response: empty
Jan 23 12:14:05 localhost info tmm[21782]: 2017-01-23 12:14:04 myf5 qid 14530 from x.x.x.53#34954: view none: query: autodiscover.mydomain.com IN AAAA +EDC (x.x.x.253%0)
Jan 23 12:14:05 localhost info tmm[21782]: 2017-01-23 12:14:04 myf5 qid 14530 to x.x.x.53#34954: [NOERROR qr,aa,rd,cd,do] response: empty
Jan 23 12:14:05 localhost info tmm[21782]: 2017-01-23 12:14:04 myf5 qid 29131 from x.x.x.53#31619: view none: query: autodiscover.mydomain.com IN A +ED (x.x.x.253%0)
Jan 23 12:14:05 localhost info tmm[21782]: 2017-01-23 12:14:04 myf5 qid 29131 to x.x.x.53#31619: [NOERROR qr,aa,rd,do] response: autodiscover.mydomain.com. 30 IN A x.x.x.75;
Jan 23 12:14:05 localhost info tmm[21782]: 2017-01-23 12:14:05 myf5 qid 42089 from x.x.x.53#50733: view none: query: outlook.mydomain.com IN A +ED (x.x.x.253%0)
Jan 23 12:14:05 localhost info tmm[21782]: 2017-01-23 12:14:05 myf5 qid 42089 to x.x.x.53#50733: [NOERROR qr,aa,rd,do] response: outlook.mydomain.com. 30 IN A x.x.x.75;
Jan 23 12:14:05 localhost info tmm[21782]: 2017-01-23 12:14:05 myf5 qid 41559 from x.x.x.53#60000: view none: query: outlook.mydomain.com IN AAAA +EDC (x.x.x.253%0)
```
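For what it's worth, per-query messages like these typically come from a DNS Logging profile (attached to the listener's DNS profile), not from the gtm.* or big3d log levels, so lowering those won't quiet them. A hedged place to look, assuming tmsh access — the profile name here is an example, and the exact property names should be checked against your version:

```
# See which DNS logging profiles exist and which log publisher they use
tmsh list ltm profile dns-logging

# Query/response logging can then be switched off per profile, e.g.:
tmsh modify ltm profile dns-logging example-dns-logging \
    enable-query-logging no enable-response-logging no
```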
Understanding the Significance of Nested Stats (GTM)

(EDIT: I pasted some scrubbed output in a "comment" below. I hope it makes my question clearer.) We recently updated our GTM to version 12.1.4, and I'm seeing some unexpected results in iControl REST's output. If I query a particular pool member that I know has been disabled, I get two rounds of status.availabilityState, status.enabledState and status.statusReason back, and they don't match:

```
"status.availabilityState": { "description": "available" },
"status.enabledState": { "description": "enabled" },
"status.statusReason": { "description": "Available" }
```

Then, further nested, is another JSON response, and it's reporting accurate state:

```
"status.availabilityState": { "description": "available" },
"status.enabledState": { "description": "disabled" },
"status.statusReason": { "description": "Available: disabled directly" }
```

What's the difference between these two, and why aren't they the same? Even something as simple as this cURL command returns the wrong value:

```
[user@host ~]$ curl -k --user f5User:${RD_SECUREOPTION_F5_PASSWORD} -X GET \
    -H "Accept: application/json" \
    https://${RD_OPTION_GTM_SERVER}/mgmt/tm/gtm/pool/a/Pool-myservice-sts.gslb-int.mydomain.com/members/~Common~f5-ALB-PAIR:IP_ADDR-443/stats?\$select=status.enabledState | jq '.'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   287  100   287    0     0   1131      0 --:--:-- --:--:-- --:--:--  1134
{
  "entries": {
    "https://localhost/mgmt/tm/gtm/pool/a/Pool-myservice-sts.gslb-int.mydomain.com/members/~Common~f5-ALB-PAIR:IP_ADDR-443/~Common~Pool-myservice-sts.gslb-int.mydomain.com:A/stats": {
      "nestedStats": {
        "entries": {
          "status.enabledState": {
            "description": "enabled"
          }
        }
      }
    }
  }
}
```

Thanks!
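Whatever the explanation, it helps to be able to see every copy of a stat the API returns. The wrapper layout in the output above is consistent — dictionaries keyed by selfLink with "nestedStats"/"entries" inside — so a recursive walk collects all occurrences, outermost first. A client-side sketch:

```python
def collect_stat(payload, stat_name):
    """Return every value of `stat_name` found at any nesting depth,
    in document order (outer wrappers before inner nestedStats)."""
    found = []

    def walk(node):
        if not isinstance(node, dict):
            return
        if stat_name in node and isinstance(node[stat_name], dict):
            found.append(node[stat_name].get("description"))
        for value in node.values():
            walk(value)

    walk(payload)
    return found
```

Called against the stats document for a disabled member, this would surface both the "enabled" and the deeper "disabled" readings side by side, which at least makes the mismatch visible when scripting.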
LTM/GTM Combo w/ Multiple Partitions - Datacenter Creation Outside Common

I have two F5 BIG-IP Virtual Editions, each with the LTM and GTM modules. We've created a secondary partition on each to allow for future expansion. All of the LTM config is deployed outside of the Common partition. I've managed to make my way through getting the SSL certs shared between both devices with the bigip_add command and have verified with iqdump. The next step was to add the data centers to the GTM configuration. I have the secondary (non-Common) partition selected; however, when I create the data center objects they are always created in the "Common" partition. Since I wasn't able to create the data centers in the new partition in any obvious way, I ran with the assumption that this was expected behavior. When I move on to create the Server objects for the GTM/LTM devices, I am able to do so successfully, and they pull back and show all virtual servers online. Creating pools is where the problems start. When I attempt to create a pool I get this: "An error has occurred while trying to process your request." I should note that currently each device is configured with a single self IP and the GTM listener is attached to that IP address. Also, the following is found in the GTM log; no additional log entries are generated when I attempt to create a pool:

```
Oct 5 03:16:03 brsl011a alert gtmd[4530]: 011ae0f2:1: Monitor instance /Common/bigip 130.24.107.45:80 UNKNOWN_MONITOR_STATE --> UP from 130.24.107.41 (UP)
Oct 5 03:16:03 brsl011a alert gtmd[4530]: 011a6005:1: SNMP_TRAP: VS /PP2-Main-Exch/cgt-pp2-exch-preprod_app/cgt-pp2-exch-preprod_ad_http (ip:port=130.24.107.45:80) (Server /Common/ns2.wip-pp.contoso.com) state change blue --> green
Oct 5 03:16:04 brsl011a alert gtmd[4530]: 011ae0f2:1: Monitor instance /Common/bigip 130.24.107.50:135 UNKNOWN_MONITOR_STATE --> UP from 130.24.107.41 (UP)
Oct 5 03:16:04 brsl011a alert gtmd[4530]: 011a6005:1: SNMP_TRAP: VS /PP2-Main-Exch/cgt-pp2-exch-preprod_app/cgt-pp2-exch-preprod_rpc (ip:port=130.24.107.50:135) (Server /Common/ns2.wip-pp.contoso.com) state change blue --> green
Oct 5 03:16:07 brsl011a alert gtmd[4530]: 011ae0f2:1: Monitor instance /Common/bigip 130.24.107.42:80 UNKNOWN_MONITOR_STATE --> UP from 130.24.107.41 (UP)
Oct 5 03:16:07 brsl011a alert gtmd[4530]: 011a6005:1: SNMP_TRAP: VS /PP2-Main-Exch/cgt-pp2-exch-preprod_app/cgt-pp2-exch-preprod_owa_http (ip:port=130.24.107.42:80) (Server /Common/ns2.wip-pp.contoso.com) state change blue --> green
Oct 5 03:16:09 brsl011a alert gtmd[4530]: 011ae0f2:1: Monitor instance /Common/bigip 130.24.107.44:443 UNKNOWN_MONITOR_STATE --> UP from 130.24.107.41 (UP)
Oct 5 03:16:09 brsl011a alert gtmd[4530]: 011a6005:1: SNMP_TRAP: VS /PP2-Main-Exch/cgt-pp2-exch-preprod_app/cgt-pp2-exch-preprod_oa_https (ip:port=130.24.107.44:443) (Server /Common/ns2.wip-pp.contoso.com) state change blue --> green
Oct 5 03:16:10 brsl011a alert gtmd[4530]: 011ae0f2:1: Monitor instance /Common/bigip 130.24.107.43:443 UNKNOWN_MONITOR_STATE --> UP from 130.24.107.41 (UP)
Oct 5 03:16:10 brsl011a alert gtmd[4530]: 011a6005:1: SNMP_TRAP: VS /PP2-Main-Exch/cgt-pp2-exch-preprod_app/cgt-pp2-exch-preprod_as_https (ip:port=130.24.107.43:443) (Server /Common/ns2.wip-pp.contoso.com) state change blue --> green
```

I have a couple of questions:

1) Are the data centers being created inside the Common partition instead of the secondary partition an expected result, or should I be able to create data centers and have them show in my secondary partition?

2) Knowing the above is currently true (data centers in the Common partition), when I go to create the pools, could this be a cause for the error?

Thanks to anyone who actually read this lengthy post, and to anyone who can help out! Cheers
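On question 1, one data point worth confirming against the documentation for your specific version: GTM has historically required data center, server, and link objects to live in /Common, while wide IPs and GTM pools can be created in other partitions and simply reference the /Common server objects. A hedged tmsh sketch (all object names are illustrative, not from this environment):

```
# Data centers, servers and links belong in /Common
create gtm datacenter /Common/dc-primary
create gtm server /Common/ltm-pair-1 datacenter /Common/dc-primary \
    product bigip addresses add { 10.0.0.1 }

# A pool can then live in the secondary partition and reference virtual
# servers discovered under the /Common server object
# (on 11.x the pool is untyped: "create gtm pool /PP2-Main-Exch/pool-example")
create gtm pool a /PP2-Main-Exch/pool-example
```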
Simple iRule for DNS Intercept on BIG-IP DNS

Within our network we are using BIG-IP DNS for all of our DHCP clients, so all DNS requests come to the BIG-IP DNS first for resolution. If there is no wide IP set up for the DNS request, the request is simply forwarded to our Windows DNS servers for resolution. This all works fine; however, due to various mergers we now have a situation where an FQDN is handled on the Windows DNS servers by use of conditional forwarders for that domain. Unfortunately, the DNS servers, which are in another country, are resolving this FQDN to a specific IP address (a NAT address) which we cannot route to from our country. The server which provides the services for this FQDN is actually based in our country, and we can route to its 'real' IP address (but not the NAT address); for operational reasons, though, we need to keep using the conditional forwarders and the DNS resolution from the overseas DNS servers.

So, all I want to do is put a very simple iRule on the F5 BIG-IP DNS which sits behind the FQDN, which I will present as a wide IP, so that if any of the DHCP clients which use the F5 BIG-IP DNS for resolution does a lookup for that specific FQDN, the iRule returns the 'real' IP address of the server, and the DNS request is intercepted before it reaches the Windows DNS servers. I'm sure this is something REALLY simple to achieve, but not being an iRule expert, I just cannot seem to get the syntax correct. I'm sure this is probably a three-line iRule, but I'm failing to find a simple example anywhere! All it needs to do is: create a wide IP of "ABC.DOMAIN.PRIVATE" and apply a simple iRule along the lines of:

```
if DNS lookup == "ABC.DOMAIN.PRIVATE"
then return DNS response "123.123.123.123"
else process the request as normal
```

Surely this is possible? Can anyone help with an example iRule? Any help appreciated. Dom.
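Something along those lines should indeed be possible with a DNS iRule. A minimal, hedged sketch — it assumes DNS Services/GTM is provisioned so the DNS:: commands are available, that an A record is wanted, and the name, TTL, and address are just the placeholders from the question:

```tcl
when DNS_REQUEST {
    # Intercept lookups for the merged domain's FQDN and answer locally;
    # everything else falls through to normal processing and on to the
    # Windows DNS servers.
    if { [string tolower [DNS::question name]] equals "abc.domain.private" } {
        DNS::answer insert "[DNS::question name]. 300 IN A 123.123.123.123"
        DNS::return
    }
}
```

Attached to the listener (or the wide IP), DNS::return sends the response immediately, so the query never reaches the conditional forwarders.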
F5 BIG-IP Link Controller Resolving FQDN DNS Requests

Hi, I have an environment licensed with Link Controller and not with GTM. I need to configure the LC with a wide IP resolving DNS requests, while the zone configuration remains on an external DNS server. When the DNS server receives a request for an FQDN that is configured on the LC, that FQDN is registered on the DNS server with NS records pointing to the two listener IPs of the Link Controller, and the LC resolves the DNS request for this FQDN. Is it possible to configure this with Link Controller? Thank you
Changing Member Priority in GTM Using an iRule

Hi, We have a requirement to change the pool member priority in GTM for Global Availability, based on the status of a pool member in another pool. How can we accomplish this? We can check the active_members status of the pool, but what command can be used to change the priority of the pool in question? We are currently running version 11.6.0.
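As far as I know, iRules don't expose a command that rewrites a configured GTM priority; the usual workaround is to make the decision per request in a wide-IP iRule, steering resolution to a different pool when the dependency pool has lost its members. A hedged sketch building on the active_members check mentioned above (pool names are placeholders; it assumes both pools are attached to the wide IP):

```tcl
when DNS_REQUEST {
    # If the pool we depend on has no active members, resolve from the
    # backup pool instead of the normally preferred one.
    if { [active_members pool_dependency] < 1 } {
        pool pool_backup
    } else {
        pool pool_primary
    }
}
```

This doesn't change the stored Global Availability order, but it has the same effect at resolution time.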
GTM (DNS) Monitoring of LTM Virtual Servers when LTM Virtual Server IPs are NATed via Firewall

I'd like to share my experience of a specific scenario in deploying GTM and LTM, and open it up to the community in case we can find a better way to do this than what I've come up with. My company recently purchased some F5 LTMs and GTMs, and there were a couple of design requirements/constraints that we had to follow.

Scenario & network design requirements:

1. All self IPs and virtual servers on the F5 LTM must use private IP addresses and must not use public IP addresses.
2. For applications served via F5 LTM virtual servers that need to be accessed over the internet, the public IP will be NATed by an internet-facing firewall to the private IP configured on the F5 LTM virtual server.
3. GTM needs to be able to monitor the status of virtual servers on the LTM using iQuery, but when GTM responds to public DNS queries, it must return the public IP.

As you can see, we already have a problem here, because Virtual Server Discovery will populate the LTM Server object on the GTM with all the virtual servers on the LTM, but they're all configured with private IPs. You cannot link these virtual servers to wide IP pools and onwards to wide IPs, because then GTM would return private IPs when it receives DNS queries. The solution I came up with was this:

1. Establish iQuery between the LTM and GTM, and also enable Virtual Server Discovery.
2. Manually create Server objects of product Generic Host for each virtual server that needs to be reached over the internet. Use the public IP allocated by the network team, which will be NATed at the firewall (e.g. 1.1.1.1). Do not apply any health monitors, and do not fill in the "Translation" field.
3. Manually create Virtual Server objects under the Server object created in step 2. Use the public IP allocated by the network team (e.g. 1.1.1.1), switch the "Configuration" drop-down menu to "Advanced", apply a simple gateway_icmp monitor, and in the Dependency List search for the actual virtual server which will accept the traffic (e.g. 10.1.1.1); this virtual server would have been discovered earlier, in step 1, by Virtual Server Discovery.

This means the diagram now becomes like this: when we do step 3 above, the GTM pings the public NATed IP of the virtual server (1.1.1.1), the firewall NATs the IP to the private IP (10.1.1.1), the ping reaches the LTM virtual server, and if the ping is successful the object shows green on the GTM. This alone is not enough, however, as a "Standard" type virtual server on the LTM will still respond to pings even if all the pool members are unavailable and the virtual server itself is unavailable (this is where virtual server status as updated via iQuery is superior to a normal monitor). To solve this problem I used the Dependency List option below the Health Monitor section and chose the corresponding virtual server that was discovered by Virtual Server Discovery (VS1, 10.1.1.1). This way, should all the pool members become unavailable on the LTM, the LTM updates the status of the virtual server to the GTM via iQuery, and the GTM marks the 1.1.1.1 Virtual Server object unavailable even if the pings still succeed.

So my question to the community is: given the restrictions above, is this the correct way to make GTM give out public IPs when the virtual servers on the LTMs are configured with private IPs?

There was another question on this same topic from 2016 (linked below), but it sort of died out without a resolution: https://devcentral.f5.com/questions/gtm-to-give-away-public-ip-address-while-monitoring-the-private-ltm-vs-49835

Update 15 Mar 2019: I learnt that when adding an LTM that's separated from the GTM by a firewall that does NAT translation, the GTM will not perform Virtual Server Discovery: https://support.f5.com/csp/article/K9138
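Steps 2 and 3 of the approach above can be sketched in tmsh roughly as follows — hedged: all names and addresses are illustrative, and the exact depends-on syntax should be checked against your version's tmsh reference:

```
# Generic-host server carrying the public (NATed) address, no monitor
create gtm server /Common/public-web datacenter /Common/dc1 \
    product generic-host addresses add { 1.1.1.1 }

# Virtual server on that generic host: ping the public IP, but also
# depend on the real (discovered) LTM virtual server's iQuery status
modify gtm server /Common/public-web virtual-servers add {
    vs-public {
        destination 1.1.1.1:443
        monitor /Common/gateway_icmp
        depends-on add { /Common/ltm-pair:/Common/vs-internal }
    }
}
```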
Searching and Filtering Objects by Metadata

I've been adding some metadata to the WIPs in our F5 GTMs, e.g.:

```
"metadata": [
    { "name": "xxAppName",     "persist": "true", "value": "Web Service" },
    { "name": "xxAppOwner",    "persist": "true", "value": "myGroup2" },
    { "name": "xxAppSupport",  "persist": "true", "value": "support_email@mydomain.com" },
    { "name": "xxServiceName", "persist": "true", "value": "myService2" },
    { "name": "xxWIPStatus",   "persist": "true", "value": "active" }
],
```

This is useful when I'm using Splunk or jq to parse the results, but what if I want to limit the scope of the returned data in the original request, analogous to a filter like "where xxAppSupport == support_email@mydomain.com"? Do any versions of iControl REST (and, by extension, the SDK) support this? It'll be really useful when this effort extends to our LTMs. Thanks.
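In my experience, iControl REST's $filter support is limited (it is most commonly used to filter by partition), so a dependable fallback is to fetch the collection and filter client-side. A sketch of the matching logic, using the metadata shape shown above:

```python
def has_metadata(item, name, value):
    """True if a wide-IP dict carries a metadata entry name == value."""
    return any(
        md.get("name") == name and md.get("value") == value
        for md in item.get("metadata", [])
    )

def filter_by_metadata(items, name, value):
    """items: the 'items' list from e.g. GET /mgmt/tm/gtm/wideip/a."""
    return [it for it in items if has_metadata(it, name, value)]
```

The same two functions work unchanged against LTM collections, since the metadata property has the same shape there.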