GTM return LDNS IP to client
Problem this snippet solves: We do a lot of our load balancing based on topology rules, so it is often very useful to know where the DNS request is actually coming from, rather than just the client's IP and the DNS servers they have configured, especially if they are behind an ADSL router doing NAT or some other similar setup. This rule simply returns the IP address of the LDNS that eventually made the query to the GTM device in the response to a lookup for the WideIP using the rule, as well as logging the response and the perceived location.

Code :

```
rule "DNS_debug" partition "Common" {
    when DNS_REQUEST {
        host [IP::client_addr]
        log local0.err "Debug address : [IP::client_addr] [whereis [IP::client_addr]]"
    }
}
```
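To verify the rule from a client, you can resolve the WideIP and check that the returned address matches the resolver your query went through. Below is a minimal sketch using Python with the dnspython library; the WideIP name debug.example.com is a hypothetical placeholder for a WideIP that has the rule attached.

```python
# Resolve a WideIP carrying the DNS_debug rule; the A record returned
# should be the IP of the LDNS that GTM saw, not the client's own IP.
# "debug.example.com" is a hypothetical WideIP name; substitute your own.
import dns.resolver

answer = dns.resolver.resolve("debug.example.com", "A")
for record in answer:
    print("LDNS seen by GTM:", record.address)
```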
Knowledge sharing: Velos and rSeries (F5OS) basic troubleshooting, logs and commands

This is another part of my knowledge-sharing articles, where I will take a deeper look into investigating Velos and rSeries issues, logs and commands.

1. Velos HA controller and blade issues.

As Velos is the system with two controllers in active/standby mode, checking for controller HA issues is only needed with Velos. The HA order can be different for the system and for the different partitions: to check the HA state for the system, use the /var/log_controller/cc-confd file, and for a partition HA issue look at the partition velos log at /var/F5/partition<ID>/log/velos.log. You can also enable HA debugging for the controllers with "system dbvars config debug confd ha-state-machine true".

Overview of HA: https://support.f5.com/csp/article/K19204400
Controller HA: https://support.f5.com/csp/article/K21130014
Partition HA: https://support.f5.com/csp/article/K58515297

2. Entering into F5OS objects.

The rSeries and Velos tenants are like vCMP guests on VIPRION, and if there are access issues with them it can be necessary to open their console. For this the "virtctl" command can be used, for example "/usr/share/omd/kubevirt/virtctl console <tenant_name>-<tenant_instance_ID>". As Velos uses blades and partitions, it may also be needed to SSH to a blade with "ssh slot<number>" or to enter a partition with "docker exec -it partition<ID>_cli su admin", since sometimes, for example to see the GUI logs, entering the partition's GUI container is required. F5 Support will do this in most cases, and maybe this will be the way to enter the BIG-IP NEXT CLI.

Overview of VELOS system architecture: https://support.f5.com/csp/article/K73364432
Overview of rSeries system architecture: https://support.f5.com/csp/article/K49918625
rSeries tenant access: https://support.f5.com/csp/article/K33373310
Velos blade and tenant access: https://support.f5.com/csp/article/K65442484
Velos partition access: https://support.f5.com/csp/article/K11206563

3. Useful commands and logs.

As Velos/rSeries is a clustered system, the "show cluster" command is useful for spotting issues (look for "cluster is NOT ready."). The velos.log for the controller and the partitions is a great place to start, and debug level can be enabled for it under "SYSTEM SETTINGS > Log Settings", which is also where rSeries logging is set to debug. The /var/log/openshift.log is also good to check on Velos if there are cluster issues, or the k3s.log on rSeries. The confd logs are like the mcpd logs on BIG-IP, so they are really useful on Velos and rSeries. Other nice commands are "docker ps", "oc get pod --all-namespaces -o wide" and "kubectl get pod --all-namespaces -o wide", but Support will ask for their output in most cases.

Velos cluster status: https://support.f5.com/csp/article/K27427444
Velos debug: https://support.f5.com/csp/article/K51486849
Velos openshift example issue: https://support.f5.com/csp/article/K01030619
Monitoring Velos: https://clouddocs.f5.com/training/community/velos-training/html/monitoring_velos.html
Monitoring rSeries: https://clouddocs.f5.com/training/community/rseries-training/html/monitoring_rseries.html

4. Velos and rSeries tcpdump packet captures, the file utility and qkview files.

On Velos, qkviews can be created per controller or per partition, as they are separate qkviews.
Tcpdumps of client traffic are done with the tcpdump utility from the F5OS CLI (su - admin), while a plain tcpdump in the Linux kernel only captures the management IP addresses of the appliance, controller (floating or local), partition or tenant. The file utility allows file transfers to remote servers, and even downloading any log from the Velos/rSeries to your computer, which was not possible before with iSeries or VIPRION. The file utility also initiates outbound sessions to the remote servers, which is extra security, as no inbound sessions need to be allowed on the firewall/web proxy; it can even be triggered by an API call, and I may make a codeshare article for this.

Velos tcpdump utility: https://support.f5.com/csp/article/K12313135
rSeries tcpdump utility: https://support.f5.com/csp/article/K80685750
Qkview Velos: https://support.f5.com/csp/article/K02521182
Qkview Velos CLI location: https://support.f5.com/csp/article/K79603072
Qkview rSeries: https://support.f5.com/csp/article/K04756153
SCP: https://support.f5.com/csp/article/K34776373

5. A final fast check could be to use "kubectl get pods -o wide --all-namespaces" (with Velos, "oc get pods -o wide --all-namespaces" should also work) to see that all pods are OK and running. Also "docker ps" or "docker ps --format 'table {{.Names}}\t{{.RunningFor}}\t{{.Status}}'" is useful to spot a container that keeps going down and up, and this can be correlated with issues seen in the "show cluster" output. A small scripted version of this pod check is sketched at the end of this article.

6. The new F5OS has much better hardware diagnostics than the old devices, so there is no longer a need to run EUD tests, as all system hardware components and their health can be viewed from the GUI or CLI, and this is also shown in F5 iHealth!

https://techdocs.f5.com/en-us/velos-1-5-0/velos-systems-administration-configuration/title-system-settings.html

Edit: For Velos and rSeries, always keep the software up to date. For example, with Velos 1.5.1 the cluster rebuild needed because the OpenShift SSL certificate is only valid for one year is much simpler; there are also the F5 rSeries and Cisco Nexus issues, and the corrupt qkview generation when the GUI rather than the CLI is used (the Velos cluster rebuild with "touch /var/omd/CLUSTER_REINSTALL" can solve many issues, but it will cause some timeout):

http://cdn.f5.com/product/bugtracker/ID1135853.html
https://my.f5.com/manage/s/article/K000092905
https://support.f5.com/csp/article/K79603072
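As a follow-up to point 5 above, here is a minimal sketch in Python that automates the same pod check; it assumes kubectl is available and configured on the machine where it runs (on Velos, swapping "kubectl" for "oc" should also work).

```python
# Flag any pod that is not Running or Succeeded, the same check the
# article suggests doing by eye with "kubectl get pods --all-namespaces".
import json
import subprocess

out = subprocess.run(
    ["kubectl", "get", "pods", "--all-namespaces", "-o", "json"],
    capture_output=True, text=True, check=True,
)
for pod in json.loads(out.stdout)["items"]:
    phase = pod["status"].get("phase", "Unknown")
    if phase not in ("Running", "Succeeded"):
        print(pod["metadata"]["namespace"], pod["metadata"]["name"], phase)
```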
Logstash pipeline tester

Code is community submitted, community supported, and recognized as "Use At Your Own Risk".

Short Description

A tool that makes developing Logstash pipelines much, much easier.

Problem solved by this Code Snippet

Oh. The problem... Have you ever tried to write a Logstash pipeline? Did you suffer hair loss and splitting migraines? So did I. Presenting the Logstash pipeline tester, which gives you a web interface where you can paste raw logs, send them to the included Logstash instance and see the result directly in the interface. The included Logstash instance is also configured to automatically reload once it detects a config change.

How to use this Code Snippet

TLDR; Don't do this, read the manual or check out the video below.

Still here? Ok then!

1. Install docker
2. Clone the repo
3. Run these commands in the repo root folder:
   sudo docker-compose build   # Skip sudo if running Windows
   sudo docker compose up      # Skip sudo if running Windows
4. Go to http://localhost:8080 on your PC/Mac
5. Pick a pipeline and send data
6. Edit the pipeline
7. Send data
8. Rinse, repeat

Version info

v1.0.0
Docker containers no longer run as root
Vulnerability fix: https://github.com/epacke/logstash-pipeline-tester/releases/tag/v1.0.0

Video on how to get started: https://youtu.be/Q3IQeXWoqLQ

Please note that I accidentally started the interface on port 3000 in the video while the shipped version uses port 8080. It took me roughly 5 hours and more retakes than I can count to make this video, so that mistake will be preserved for the internet to laugh at.

The manual: https://loadbalancing.se/2020/03/11/logstash-testing-tool/

Code Snippet Meta Information

Version: Check GitHub
Coding Language: NodeJS, Typescript + React

Full Code Snippet

https://github.com/epacke/logstash-pipeline-tester
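If you prefer to script the log injection instead of pasting logs into the web interface, a raw log line can be sent straight to a pipeline's TCP input. Here is a minimal sketch in Python; port 5000 is a hypothetical example and must match whatever input { tcp { port => ... } } your pipeline config declares.

```python
# Send one raw log line to a Logstash pipeline listening on a TCP input.
# The port is an assumption; use the one declared in your pipeline config.
import socket

with socket.create_connection(("localhost", 5000)) as sock:
    sock.sendall(b"<134>Jan  1 00:00:00 host app: hello logstash\n")
```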
BIG-IP Report

Problem this snippet solves:

Overview

This is a script which will generate a report of the BIG-IP LTM configuration on all your load balancers, making it easy to find information and get a comprehensive overview of virtual servers and the pools connected to them. This information is used to relay information to NOC and developers, to give them insight into where things are located and to let them plan patching and deploys. I also use it myself as a quick way to get information, or to gather data used as a foundation for RFCs, i.e. get a list of all external virtual servers without compression profiles. The script has been running on 13 pairs of load balancers, indexing over 1200 virtual servers, for several years now, and the report is widely used across the company and by many companies and governments across the world. It's easy to set up and use and only requires auditor (read-only) permissions on your devices.

Demo/Preview

Interactive demo: http://loadbalancing.se/bigipreportdemo/

Screen shots

The main report
The device overview
Certificate details

How to use this snippet:

Installation instructions

BigipReport REST

This is the only branch we're updating since the middle of 2020, and it supports 12.x and upwards (maybe even 11.6).

Downloads (two latest versions):
https://loadbalancing.se/downloads/bigipreport-v5.7.0.zip
https://loadbalancing.se/downloads/bigipreport-v5.6.5.zip

Documentation, installation instructions and troubleshooting: https://loadbalancing.se/bigipreport-rest/

Docker support: https://loadbalancing.se/2021/01/05/running-bigipreport-on-docker/

Kubernetes support: https://loadbalancing.se/2021/04/16/bigipreport-on-kubernetes/

BIG-IP Report (Legacy)

Older version of the report that only runs on Windows and depends on a PowerShell plugin originally written by Joe Pruitt (F5).

BIG-IP Report (only download this if you have v10 devices): https://loadbalancing.se/downloads/bigipreport-5.4.0-beta.zip
iControl Snapin: https://loadbalancing.se/downloads/f5-icontrol.zip
Documentation and Installation Instructions: https://loadbalancing.se/bigip-report/

Upgrade instructions

Protect the report using APM and Active Directory

Written by DevCentral member Shann_P: https://loadbalancing.se/2018/04/08/protecting-bigip-report-behind-an-apm-by-shannon-poole/

Got issues/problems/feedback?

Still have issues? Drop a comment below. We usually reply quite fast. Any bugs found, issues detected or ideas contributed make the report better for everyone, so it's always appreciated.

---

Join us on Discord: https://discord.gg/7JJvPMYahA

Code : BigIP Report

Tested this on version: 12, 13, 14, 15, 16
F5 XC reviewing API requests which the GUI sends and a backup of the config with Python/Ansible

Short Description

The F5 XC Distributed Cloud GUI in the background sends API requests with a JSON body to the system, and those requests can be easily reviewed.

Problem solved by this Code Snippet

If someone wonders how to do some tasks that the XC GUI does the same way, but with automation through the API and JSON, this article will help them. At the end I have also shown how to retrieve XC JSON data with the API.

How to use this Code Snippet

Reviewing the API requests that are generated by the XC GUI.

Full Code Snippet

There are three ways to review the API requests that the XC GUI generates:

1. On each XC element, for example the load balancer, you can click on the JSON tab and see the JSON code. The JSON code can even be edited directly from the GUI dashboard!
2. The API documentation can be reviewed directly from the XC GUI.
3. The final option is just to use the browser developer tools and see what API requests are sent by F5 XC. This feature is now present on most new F5 products like F5OS (Velos/rSeries) and F5 NEXT.

The XC JSON objects created from the API are a form of configuration backup. Even if the objects were created from the GUI, API GET requests can be used to retrieve their JSON data, and this can be saved to a backup file in the form of a snapshot. I have used Python with the requests library, and the URL and API key are added as user input arguments. The script can be used to get information like the XC LB or service policies, for example "/api/config/namespaces/default/service_policys". The script first calls an API endpoint to get, for example, all the load balancer or service policy names, and then uses those names to get the config of each individual service policy or load balancer in a for loop. There is a time.sleep(1) to add a one-second delay between API requests. The code can have the full URL like https://{tenant_name}.console.ves.volterra.io/api/config/namespaces/{namespace}/service_policys and the API token added during script execution, or the arguments can be given at the start by commenting out url = sys.argv[1] and api_token = sys.argv[2] and executing the script like python3 service_policy.py {argument1} {argument2}. The API token is hidden by default using the getpass library for extra security. A minimal sketch of this approach is included at the end of this article. See GitHub for the code: Nikoolayy1/xc_api_script: XC API script to retrieve basic json config (github.com)

In some cases using Terraform for XC will be best, as XC has strong Terraform support, as seen at the links below. Ansible can also be used, but XC does not have many developed Ansible modules, so in many cases the Ansible URI module will need to be used, and the Ansible URI module (ansible.builtin.uri module - Interacts with webservices - Ansible Documentation) is in the background just the Python requests or http.client module, as Ansible is Python in the background, so it is better to use Python directly in that case.

XC Terraform:
Terraform | F5 Distributed Cloud Tech Docs
F5 Distributed Cloud WAAP deployment with Terrafor... - DevCentral
Using Terraform and F5® Distributed Cloud Mesh to ... - DevCentral

Example Ansible code (even if I said Python is better in this case):
xc_api_script/xc_ansible at main · Nikoolayy1/xc_api_script (github.com)
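As mentioned above, here is a minimal sketch of the backup approach with Python and requests. The tenant URL, namespace and object type come from the description above; the "APIToken" Authorization scheme and the "items"/"name" fields of the list response are assumptions to verify against your tenant's API documentation.

```python
# Sketch: list XC service policies, then save each one's JSON config
# as a snapshot-style backup file, one request per second.
import getpass
import json
import time

import requests

url = input("API URL, e.g. https://<tenant>.console.ves.volterra.io/api: ")
api_token = getpass.getpass("API token: ")  # hidden input, as in the article
headers = {"Authorization": "APIToken " + api_token}  # assumed auth scheme

# First call: get all object names of the chosen type.
listing = requests.get(url + "/config/namespaces/default/service_policys",
                       headers=headers)
listing.raise_for_status()

# Second step: fetch each object's full config in a for loop.
for item in listing.json().get("items", []):
    name = item["name"]
    obj = requests.get(url + "/config/namespaces/default/service_policys/"
                       + name, headers=headers)
    obj.raise_for_status()
    with open(name + ".json", "w") as f:
        json.dump(obj.json(), f, indent=2)
    time.sleep(1)  # one-second delay between requests
```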
Session Table Export

Problem this snippet solves: This sample goes along with the Tech Tip titled Session Table Exporting With iRules. It creates a mechanism for you to export the data from your session tables for archiving or external reporting. NOTE: This functionality is included in the Session Table Control iRule, and since it renders only partially here, it has been removed.
Session Table Control

Problem this snippet solves: This sample goes along with the Tech Tip titled Session Table Control With iRules. It creates an iRules-based HTML application to allow you to view, edit, delete, import, and export your session subtable data.

How to use this snippet: Apply to a virtual server with session table entries and you can import/export/edit/delete entries.

Code (GitHub gist)
iCall Script that only runs on Active member

Problem this snippet solves: I had a request to run an iCall script only on the active member in a pair.

How to use this snippet: This won't work if you're using active/active via traffic-groups. To run the script on a schedule, attach it to a periodic iCall handler (for example with "tmsh create sys icall handler periodic").

Code :

```
# Only execute if local BIG-IP is active in failover
if {[exec cat /var/prompt/ps1] == "Active"} {
    tmsh::log "I LIKE SOUP!"
}
```

Tested this on version: 12.1
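The same check the iCall script does with [exec cat /var/prompt/ps1] can be reused from any script running on the box. Here is a minimal sketch in Python, assuming it runs locally on the BIG-IP:

```python
# Run a task only when this BIG-IP is the active unit of the pair.
# /var/prompt/ps1 holds the failover state string shown in the shell prompt.
def is_active(prompt_file="/var/prompt/ps1"):
    with open(prompt_file) as f:
        return f.read().strip() == "Active"

if is_active():
    print("I LIKE SOUP!")  # replace with the real task
```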
Problem this snippet solves: How to create an internal HTTP Load-Balancer with VoltMesh where the Origin is reachable through a Volterra node. Two steps are needed:

1. Creation of the Origin (1-origin.tf file)
2. Creation of the Load-Balancer (2-http-lb.tf file)

How to use this snippet:

Pre-Requirements: Have a Volterra API Certificate. Please see this page for the API Certificate generation: https://volterra.io/docs/how-to/user-mgmt/credentials

Extract the certificate and the key from the .p12:

```
openssl pkcs12 -info -in certificate.p12 -out private_key.key -nodes -nocerts
openssl pkcs12 -info -in certificate.p12 -out certificate.cert -nokeys
```

Create a variables.tf Terraform variables file:

```
variable "api_cert" {
  type    = string
  default = "/<full path to>/certificate.cert"
}

variable "api_key" {
  type    = string
  default = "/<full path to>/private_key.key"
}

variable "api_url" {
  type    = string
  default = "https://<tenant_name>.console.ves.volterra.io/api"
}
```

Create a main.tf Terraform file:

```
terraform {
  required_version = ">= 0.12.9, != 0.13.0"
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = ">=0.0.6"
    }
  }
}

provider "volterra" {
  api_cert = var.api_cert
  api_key  = var.api_key
  url      = var.api_url
}
```

In the directory where your Terraform files are, run:

```
terraform init
```

Then:

```
terraform apply
```

Code :

```
//==========================================================================
// Definition of the Origin, 1-origin.tf
// Start of the TF file
resource "volterra_origin_pool" "sample-http-origin-pool" {
  name = "sample-http-origin-pool"
  // Name of the namespace where the origin pool must be deployed
  namespace = "mynamespace"

  origin_servers {
    private_ip {
      ip = "10.17.20.13"
      // From which interface of the node onsite the IP of the service is
      // reachable. Values are inside_network / outside_network or both.
      outside_network = true
      // Site definition
      site_locator {
        site {
          name      = "name-of-the-site"
          namespace = "system"
          tenant    = "name-of-the-tenant"
        }
      }
    }
    labels = {
    }
  }

  no_tls                 = true
  port                   = "80"
  endpoint_selection     = "LOCALPREFERED"
  loadbalancer_algorithm = "LB_OVERRIDE"
}
// End of the file
//==========================================================================

//==========================================================================
// Definition of the Load-Balancer, 2-http-lb.tf
// Start of the TF file
resource "volterra_http_loadbalancer" "sample-http-lb" {
  depends_on = [volterra_origin_pool.sample-http-origin-pool]

  // Mandatory "Metadata"
  name = "sample-http-lb"
  // Name of the namespace where the load balancer must be deployed
  namespace = "mynamespace"
  // End of mandatory "Metadata"

  // Mandatory "Basic configuration"
  domains = ["mydomain.internal"]
  http {
    dns_volterra_managed = false
  }
  // End of mandatory "Basic configuration"

  // Optional "Default Origin server"
  default_route_pools {
    pool {
      name      = "sample-http-origin-pool"
      namespace = "mynamespace"
    }
    weight = 1
  }
  // End of optional "Default Origin server"

  // Mandatory "VIP configuration"
  advertise_on_public_default_vip = true
  // End of mandatory "VIP configuration"

  // Mandatory "Security configuration"
  no_service_policies = true
  no_challenge        = true
  disable_rate_limit  = true
  disable_waf         = true
  // End of mandatory "Security configuration"

  // Mandatory "Load Balancing Control"
  source_ip_stickiness = true
  // End of mandatory "Load Balancing Control"
}
// End of the file
//==========================================================================
```

Tested this on version: No Version Found