Forum Discussion
LTM: Troubleshooting with no source IP
We have a problem.
We have an LTM in AWS with a VIP configured along with multiple pools; traffic is sent to the pools via an iRule based on URL. From Google Cloud, using the serverless Google Cloud Build service, they trigger a job that fetches details from AWS and builds instances on Google Cloud. Because this is serverless technology on Google Cloud, we are not sure of the source IP.
The problem is that one build succeeds, then the next 4 or 5 builds fail; then one succeeds again and the rest fail.
GCP confirmed the traffic is leaving Google.
We are unable to tell whether the traffic is reaching the LTM or not.
This VIP and its pools are already in production, used by multiple clients, and working fine.
Only the GCP traffic is an issue.
Could anyone help me with troubleshooting ideas?
If the job endpoint is only consumed by GCP, you can log the source IP whenever you get a request to the endpoint URI. Then, after 5 or 6 jobs, you can look for the relevant log entries and see whether you find them all in your log files.
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/api/foo/bar" } {
        log local0. "Potential GCP call from IP [IP::client_addr]"
    }
}
This assumes the L4 connection is already established. If not, it is a bit harder to log only GCP traffic; in that case you can find a way to leverage the GCP CE IP ranges and log every connection from IPs within those ranges. This still does not guarantee that it is production traffic rather than bot-generated traffic.
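A minimal sketch of that variant, assuming an ip-type data-group (here called gcp_sources, an illustrative name) already holds the GCP CE ranges:

when CLIENT_ACCEPTED {
    # gcp_sources is a hypothetical ip-type data-group holding the GCP CE ranges
    if { [class match [IP::client_addr] equals gcp_sources] } {
        log local0. "L4 connection from a GCP range: [IP::client_addr]"
    }
}

Logging in CLIENT_ACCEPTED catches the connection at L4 setup, even if no HTTP request ever completes; messages logged with log local0. end up in /var/log/ltm on the BIG-IP.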
- imabbas_90 (Altocumulus)
Hello Amine, thanks for your suggestion.
If I add this iRule to the VIP, just to double-confirm: this won't hinder any other users, correct?
Also, where can I find these logs, under /var/logs/LTM?
In case the above fails, any idea how to set the GCP CE IP ranges as the source and filter on them? Or any other thoughts?
TIA
- JRahm (Admin)
Hi imabbas_90, you can create a data-group to contain your GCP IP ranges. I was able to do that programmatically against that JSON data with a little Python (where gcp_src.json is a file containing what you linked above):
import json

with open('gcp_src.json') as f:
    data = json.load(f)

f1 = open('gcp_src_dg', 'w')
f1.write('ltm data-group internal gcp_sources {\n')
f1.write('    records {\n')
for prefix in data.get('prefixes'):
    if 'ipv4Prefix' in prefix:
        f1.write(f'        {prefix.get("ipv4Prefix")} {{ }}\n')
    elif 'ipv6Prefix' in prefix:
        f1.write(f'        {prefix.get("ipv6Prefix")} {{ }}\n')
f1.write('    }\n')
f1.write('    type ip\n')
f1.write('}\n')
f1.close()
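Once the generated gcp_src_dg file is copied onto the BIG-IP, you should be able to merge it into the running configuration with something like tmsh load sys config merge file /var/tmp/gcp_src_dg (path illustrative); that creates the gcp_sources data-group referenced in the iRule below.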
Then in an iRule, you can just log against sources that match that range in the data-group:
when HTTP_REQUEST {
    if { [class match [IP::client_addr] equals gcp_sources] } {
        log local0. "Client IP: [IP::client_addr] matches GCP source..."
    }
}
I'd recommend you take a heavily filtered packet capture as well, actively triggering the job while capturing.
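On the BIG-IP that could be something like tcpdump -nni 0.0 host <suspected source IP>, where 0.0 is the pseudo-interface that captures across all VLANs; tighten the filter further (port, VLAN) to keep the capture manageable.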
- imabbas_90 (Altocumulus)
Hello Mate,
Thanks for checking.
We have won our first battle: we found the IP with which the traffic is coming into our environment.
We are proceeding with further checks using the suggestions given here. Once I have something concrete I will update here, and we can make it available to everyone as a resolution.
We were asked to prove whether or not the traffic is leaving our F5. We can see that the traffic is handed over to the pool at the F5 level. It is a cloud F5, so the pool uses an AWS LB URL.
The traffic is getting NATed on the F5, so the AWS LB level can't segregate the traffic coming from Google. If it reaches the AWS LB, they will start investigating from their side. So we thought of creating a SNAT pool and associating it with the IPs that Google uses, so that we can segregate the traffic on the AWS LB. We are on it; I'll keep you posted.
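A minimal sketch of that conditional SNAT idea, assuming the gcp_sources data-group from above and a pre-created SNAT pool named gcp_snat_pool (both names illustrative):

when CLIENT_ACCEPTED {
    # Send only connections arriving from GCP ranges through a dedicated
    # SNAT pool so the AWS LB can distinguish them by source address.
    # gcp_snat_pool is a hypothetical SNAT pool configured on the BIG-IP.
    if { [class match [IP::client_addr] equals gcp_sources] } {
        snatpool gcp_snat_pool
    }
}

All other clients keep the virtual server's default SNAT behavior.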