DevOps
iRules Editor & Declarative Development with Visual Studio Code
The Windows iRule Editor has had a very long life. But it hasn't been updated in years and really should be sunsetted in your environment. There have been other attempts along the way: a personal project of mine, a Mac desktop app written in Python and Qt, that never made it past me; an Eclipse plugin several years back that gained a little traction; but the iRule Editor Joe Pruitt created lived on through all of that. However, there are now a couple of fantastic options in the Visual Studio Code marketplace that combine to make a great iRules development environment and also support development with the automation toolchain. Here are the tools you'll need:

Visual Studio Code
F5 Networks iRules (for iRules command completion and syntax highlighting)
The F5 Extension (for session management and so much more)
ACC Chariot (for converting config from UCS upload to AS3)

John Wagnon and I had Ben Gordon on our DevCentral Connects live stream a couple of times to highlight the functionality, which, as mentioned, goes far beyond just iRules.

02 - Visualization of F5 BIG-IP metrics on Grafana using Prometheus and Telemetry Streaming service
Configuration using the CLI of the F5 BIG-IP device

The steps to configure the Telemetry Streaming consumer target from the CLI of the F5 BIG-IP device are discussed below. Once you have accessed the F5 BIG-IP CLI terminal, use either your default admin credentials or the new user you created in the section above (it must have the administrator privilege). Then execute the following command on the terminal, substituting those credentials for username:password:

curl -u username:password -k https://localhost/mgmt/shared/telemetry/declare

Note: -k, --insecure tells curl to skip TLS certificate verification; connections to the BIG-IP's self-signed management certificate would otherwise be considered insecure and fail. To make the call secure, trust the certificate through a CA certificate bundle instead of using -k.

Change into the /tmp directory and create a file called ts-config.json; I am using the vi editor for it.

cd /tmp
vi ts-config.json

Paste the Telemetry Streaming declaration, then save the file and exit the vi editor.

{
    "class": "Telemetry",
    "My_Poller": {
        "class": "Telemetry_System_Poller",
        "interval": 0
    },
    "My_System": {
        "class": "Telemetry_System",
        "enable": "true",
        "systemPoller": [ "My_Poller" ]
    },
    "metrics": {
        "class": "Telemetry_Pull_Consumer",
        "type": "Prometheus",
        "systemPoller": "My_Poller"
    }
}

Then execute the following command from the same directory (/tmp), replacing the username and password with your F5 BIG-IP credentials that have the administrator privilege:

curl -X POST -u username:password -k https://localhost/mgmt/shared/telemetry/declare -d @ts-config.json -H "content-type:application/json"

To verify the available metrics:

curl -u username:password -k https://localhost/mgmt/shared/telemetry/pullconsumer/metrics

Section III: Configuration of Prometheus

Once the Telemetry Streaming service has been successfully configured and the metrics are available on that path, we need to configure Prometheus to scrape the metrics data from the predefined path. The steps to configure Prometheus are as follows.

Note: In this user-guide demonstration, both Grafana and Prometheus are installed on the same host with different service ports, as mentioned earlier. CentOS 7 is used as the OS for this host machine; other distributions may use different syntax for the following status checks.

First, check the status of the Prometheus service:

sudo systemctl status prometheus.service

View the current working directory and change into /etc/prometheus:

pwd
cd /etc/prometheus
ls -al

Edit prometheus.yml and add a scrape job for Telemetry Streaming:

global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'TelemetryStreaming'
    scrape_timeout: 30s
    scrape_interval: 30s
    scheme: https
    tls_config:
      insecure_skip_verify: true
    metrics_path: '/mgmt/shared/telemetry/pullconsumer/metrics'
    basic_auth:
      username: 'F5-BIG-IP-username'
      password: 'F5-BIG-IP-password'
    static_configs:
      - targets: ['BIGIP-managementIP:443']

Then restart the Prometheus service and check its status:

sudo systemctl restart prometheus.service
sudo systemctl status prometheus.service

Note: If the configuration is correct, the Prometheus service will be active; otherwise, the service will fail to start.
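As a quick sanity check of the edited file and of the scrape target, the same things can also be verified from the CLI. This is a minimal sketch, assuming promtool was installed alongside Prometheus and that Prometheus listens on its default port 9090 on this host:

# Validate the syntax of the edited configuration file
promtool check config /etc/prometheus/prometheus.yml

# After the restart, confirm the TelemetryStreaming target is reported as "up"
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'

If the target reports "down", the usual suspects are the basic_auth credentials, the metrics_path, or TLS verification against the BIG-IP management certificate.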
To further verify whether the instance has been discovered by Prometheus:

- Go to http://prometheus-ip:service-port
- Click on the Status option and select the Targets option

Section IV: Configuration on Grafana using Prometheus as a data source

In this section, we need to connect Prometheus as a data source on Grafana. Once the data source has been successfully configured on Grafana, create a new dashboard, select Prometheus as the data source, select the relevant metrics, and change the refresh interval as required. Save and apply the panel. Then save the dashboard and view the metrics on the Grafana dashboard.

A possible issue that can arise during the configuration: if you use the default TS declaration from the official Telemetry Streaming documentation site, you may fail to see the available metrics at the mentioned link: https://<f5-management-ip>/mgmt/shared/telemetry/pullconsumer/metrics
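If the metrics are visible at that link, the same data can also be queried ad hoc before building Grafana panels. A minimal sketch, run from the Prometheus host on its default port 9090; Telemetry Streaming's Prometheus consumer normally prefixes metric names with f5_, but the exact names depend on what the system poller collects, so treat f5_system_cpu below as a placeholder and check the /pullconsumer/metrics output for the real ones:

# List a few of the metric names exposed by the pull consumer
curl -sku username:password https://<f5-management-ip>/mgmt/shared/telemetry/pullconsumer/metrics | grep '^f5_' | head

# Query one of those names through the Prometheus HTTP API
curl -s 'http://localhost:9090/api/v1/query?query=f5_system_cpu'

The same expression can then be pasted into a Grafana panel query against the Prometheus data source.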
Knowledge sharing: Velos and rSeries (F5OS) basic troubleshooting, logs and commands

This is another part of my knowledge sharing articles, where I will take a deeper look into Velos and rSeries investigation of issues, logs and commands.

1. Velos HA controller and blade issues.

As the Velos system is the one with two controllers in active/standby mode, only with Velos could it be needed to check if there is an issue with the controllers' HA. As the controllers' HA order can be different for the system and the different partitions, to check the HA for the system use the /var/log_controller/cc-confd file, or for a partition HA issue look at the partition velos log at /var/F5/partition<ID>/log/velos.log. Also, you can enable HA debug for the controllers with "system dbvars config debug confd ha-state-machine true".

Overview of HA: https://support.f5.com/csp/article/K19204400
Controller HA: https://support.f5.com/csp/article/K21130014
Partition HA: https://support.f5.com/csp/article/K58515297

List of Velos/rSeries services:
Overview of F5 VELOS chassis controller services
Overview of F5 VELOS partition services
Overview of F5 rSeries system services

2. Entering into F5OS objects.

The rSeries and Velos tenants are like vCMP guests on VIPRION, and sometimes, if there are access issues with them, it could be needed to open their console. For this the "virtctl" command can be used, for example "/usr/share/omd/kubevirt/virtctl console <tenant_name>-<tenant_instance_ID>". Also, as Velos uses blades and partitions, it could be needed to SSH to a blade with "ssh slot<number>" or to enter a partition with "docker exec -it partition<ID>_cli su admin", as sometimes, for example to see the GUI logs, entering the GUI container for the partition could be needed. F5 support will ask for this in most cases, and maybe this will be the way to enter the BIG-IP NEXT CLI.

Overview of VELOS system architecture: https://support.f5.com/csp/article/K73364432
Overview of rSeries system architecture: https://support.f5.com/csp/article/K49918625
rSeries tenant access: https://support.f5.com/csp/article/K33373310
Velos blade and tenant access: https://support.f5.com/csp/article/K65442484
Velos partition access: https://support.f5.com/csp/article/K11206563

3. Useful commands and logs.

For Velos/rSeries, as this is a system with a cluster, the "show cluster" command is useful to see any issues (look for "cluster is NOT ready."). Also, the velos.log for the controller and partitions is a great place to start, and debug level can be enabled for it under "SYSTEM SETTINGS > Log Settings", which is also the place for rSeries logging to be set to debug. The /var/log/openshift.log is good to check on Velos if there are cluster issues, or the k3s.log on rSeries. Also, the confd logs are like the mcpd logs, so they are really useful for Velos or rSeries. Other nice commands are "docker ps", "oc get pod --all-namespaces -o wide" and "kubectl get pod --all-namespaces -o wide", but the support will ask for them in most cases.

Velos cluster status: https://support.f5.com/csp/article/K27427444
Velos debug: https://support.f5.com/csp/article/K51486849
Velos openshift example issue: https://support.f5.com/csp/article/K01030619
Monitoring Velos: https://clouddocs.f5.com/training/community/velos-training/html/monitoring_velos.html
Monitoring rSeries: https://clouddocs.f5.com/training/community/rseries-training/html/monitoring_rseries.html

4. Velos and rSeries tcpdump packet captures, file utility and qkview files.

For Velos, qkviews can be created for the controller or a partition, as they are separate qkviews.
Tcpdumps for client traffic are done with the tcpdump utility from the F5OS CLI (su - admin); a tcpdump in the Linux kernel captures only the management IP addresses of the appliance, controller (floating or local), partition or tenant. The file utility allows file transfers to remote servers, or even downloading any log from the Velos/rSeries to your computer, which was not possible before with iSeries or VIPRION. Also, the file utility starts outbound sessions to the remote servers, so this adds extra security as no inbound sessions need to be allowed on the firewall/web proxy, and it can even be triggered by an API call; I may make a codeshare article for this.

Velos tcpdump utility: https://support.f5.com/csp/article/K12313135
rSeries tcpdump utility: https://support.f5.com/csp/article/K80685750
Qkview Velos: https://support.f5.com/csp/article/K02521182
Qkview Velos CLI location: https://support.f5.com/csp/article/K79603072
Qkview rSeries: https://support.f5.com/csp/article/K04756153
SCP: https://support.f5.com/csp/article/K34776373

For rSeries 2000/4000 tcpdump is different, as SR-IOV, not FPGA (rSeries Networking (f5.com)), is used to attach interfaces directly to the tenant VM: Article Detail (f5.com)

5. A final fast check could be to use "kubectl get pods -o wide --all-namespaces" (with Velos, "oc get pods -o wide --all-namespaces" should also work) to see that all pods are OK and running. Also "docker ps" or "docker ps --format 'table {{.Names}}\t{{.RunningFor}}\t{{.Status}}'" are useful to spot a container that keeps going down and up, and this can be correlated with issues seen with the "show cluster" command.

6. The new F5OS has much better hardware diagnostics than the old devices, so there is no longer a need to run EUD tests, as all system hardware components and their health can be viewed from the GUI or CLI, and this is also shown in F5 iHealth!
https://techdocs.f5.com/en-us/velos-1-5-0/velos-systems-administration-configuration/title-system-settings.html

7. For Velos and rSeries, always keep the software up to date. For example, with Velos 1.5.1 the cluster rebuild (needed because the OpenShift SSL certificate is valid for one year) is much simpler; other examples are the F5 rSeries and Cisco Nexus issues, or the corrupt qkview generation when the GUI rather than the CLI is used (a Velos cluster rebuild with "touch /var/omd/CLUSTER_REINSTALL" can solve many issues, but it will cause some timeout):
http://cdn.f5.com/product/bugtracker/ID1135853.html
https://my.f5.com/manage/s/article/K000092905
https://support.f5.com/csp/article/K79603072

In the future the "docker" commands may no longer be available; in that case just use "crictl", as it replaces the docker CLI when Kubernetes moves off the Docker runtime.
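Pulling the checks from sections 3 and 5 together, a quick first-pass health sweep from the controller bash shell might look like the sketch below ("show cluster" itself is run from the F5OS CLI rather than bash, and the partition number in the log path is a placeholder for your own):

# Pod and container health on the controller
kubectl get pods -o wide --all-namespaces
docker ps --format 'table {{.Names}}\t{{.RunningFor}}\t{{.Status}}'

# Recent partition and controller HA log entries
tail -n 50 /var/F5/partition1/log/velos.log
tail -n 50 /var/log_controller/cc-confd

Anything flapping in "docker ps", or pods stuck in a non-Running state, can then be correlated with the "cluster is NOT ready" message from "show cluster".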
01 - Visualization of F5 BIG-IP metrics on Grafana using Prometheus and Telemetry Streaming service

This user guide is all about the configuration and deployment of the Telemetry Streaming service on an F5 BIG-IP device; Prometheus then scrapes those metrics, which are finally visualized by Grafana. One can select the relevant metrics scraped by Prometheus and visualize them on Grafana, which will be demonstrated later in the guide.

Note: More detailed steps along with configuration images can be found at: https://nishalrai.com.np/2022/08/18/visualization-of-f5-big-ip-metrics-on-grafana-using-prometheus-and-telemetry-streaming-service/

This guide is heavily based on the work performed by Michael O'Leary, and one can view it here. The purpose of this guide is to document a somewhat more detailed guide for both learning and deployment aspects, and also to address the possible issues that could be faced during the process of deployment.

Telemetry Streaming (TS) is an iControl LX extension delivered as a TMOS-independent RPM file with the ability to declaratively aggregate, normalize and forward statistics and events from the BIG-IP to a consumer application by posting a single TS JSON declaration to TS's declarative REST API endpoint. Additional information about Telemetry Streaming can be found here.

Prometheus is an open-source monitoring solution that stores time series data like metrics, whereas Grafana allows visualizing the data stored in Prometheus and also supports a wide range of other sources.

A short briefing about the architecture diagram in this user-deployment case scenario: the F5 BIG-IP system is in standalone mode with a management IP of 172.20.100.173, and both the Prometheus and Grafana services are running on the same host with an IP address of 192.168.180.191, where the service port for Prometheus is the default (9090) and the service port for Grafana is 5000.

The whole deployment guide is broadly divided into the following sections, and one can jump to the required step if they have achieved the previous configuration successfully:

Section I: Download and install Telemetry Streaming
Section II: Telemetry Streaming Declaration on the F5 BIG-IP device
Section III: Configuration of Prometheus
Section IV: Configuration on Grafana using Prometheus as a data source

Section I: Download and install Telemetry Streaming

We first need to download and install the Telemetry Streaming package on the F5 BIG-IP device. Since the Telemetry Streaming package is an RPM file, it can be downloaded and then installed through the GUI or with a curl command on the CLI of the F5 BIG-IP device. In this user manual guide, we will download and then upload the Telemetry Streaming package on the BIG-IP using the iControl/iApp LX framework. One can use the alternative way, which can be found here.

First, we need to download the RPM file; one can find the latest Telemetry Streaming RPM file on the F5 Telemetry site on GitHub and download the latest RPM file. The GitHub page to download Telemetry Streaming can be found here.

After downloading the file, you need to access your F5 BIG-IP GUI with your admin-privilege account and then follow these steps:

Go To iApps module > Package Management LX > Import > Browse to the downloaded location > Select
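Whichever import route is used, a quick way to confirm the package actually installed is to ask the Telemetry Streaming REST endpoint for its version. A minimal check from any CLI (the credentials and management IP are placeholders):

# Should return the installed Telemetry Streaming version as JSON; a 404 means the RPM did not install
curl -sku username:password https://<big-ip-management-ip>/mgmt/shared/telemetry/info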
Section II: Telemetry Streaming Declaration on the F5 BIG-IP device

Once the download and installation of the F5 Telemetry Streaming package have been completed, we need to send a Telemetry Streaming declaration to configure a Telemetry Streaming pull consumer target.

Before we jump into this configuration, we need to create a new user with an administrator role on the F5 BIG-IP device (or you can just continue with the default admin user for the further configuration). We can create a new user with the following steps:

Go to System > Users > User List
Click on the Create button
Input the new user's name and password
Select the administrator role, then add it
Click on the Finished button

As we're using Prometheus in this user-guide manual, the Telemetry Streaming consumer target will be Prometheus, which is hosted on 192.168.180.191:9090.

We can use either Postman or the curl command on the CLI of the F5 BIG-IP device to configure a Telemetry Streaming pull consumer target.

Configuration using the Postman application

Follow these steps to configure the Telemetry Streaming consumer target using the Postman application.

Step I: Open Postman and create a new tab.

Step II: Select the GET method and paste the following link:
https://<big-ip-management-ip-address>/mgmt/shared/telemetry/declare

Step III: Browse to the Auth field and fill in the credentials.
Use the credentials used to log into the F5 BIG-IP (in this case, the recently created new user).

Step IV: Select the Body option.
Change the method to POST, then select the raw sub-option and the JSON data format. Paste the Telemetry Streaming declaration in the body section and then click on the Send button.

{
    "class": "Telemetry",
    "My_Poller": {
        "class": "Telemetry_System_Poller",
        "interval": 0
    },
    "My_System": {
        "class": "Telemetry_System",
        "enable": "true",
        "systemPoller": [ "My_Poller" ]
    },
    "metrics": {
        "class": "Telemetry_Pull_Consumer",
        "type": "Prometheus",
        "systemPoller": "My_Poller"
    }
}

Step V: Verify the response has a success status.
Select the GET HTTP method on https://<big-ip-management-ip-address>/mgmt/shared/telemetry/declare

Step VI: Verify the available metrics.
Create a new tab on Postman:
- In the URL section: https://<big-ip-management-ip-address>/mgmt/shared/telemetry/pullconsumer/metrics
- In the authorization section, use the same credentials used before

The second part of the configuration can be found at the following link:
https://community.f5.com/t5/technical-forum/02-visualization-of-f5-big-ip-metrics-on-grafana-using/td-p/300433

WorldTech IT - Who Ya Gonna Call? Scary Hack Story
At WorldTech IT, our specialty in Always-On emergency support means that when things go bump in the night on F5 devices, we're the ones who wake up and investigate. We've seen our fair share of headless entities, killer bugs, gremlins, possessions, zombies, and daemons, but our scariest hack story started like any other day.

Prevent BIG-IP Edge Client VPN Driver from rolling back (or forward) during PPP/RAS errors
If you (like some of my customers) want to have the BIG-IP Edge Client packaged and distributed as a software package within your corporate infrastructure, and have therefore switched off automatic component updates in your connectivity profiles, you might still get the covpn64.sys file upgraded or downgraded to the same version as the one installed on the BIG-IP APM server.

Background

We discovered that on some Windows clients the covpn64.sys file got a newer/older timestamp, and we started to investigate what caused this. The conclusion was that sometimes, after hibernation or sleep, the Edge Client is unable to open the VPN interface and therefore tries to reinstall the driver. However, instead of using a local copy of the CAB file where the covpn64.sys file resides, it downloads it from the APM server regardless of whether the version on the server and the client match. Under normal circumstances, when you have automatic upgrades on the clients, this might not be a problem; however, when you need full control over which version is being used on each connected client, this behavior can be a bit of a problem.

Removing the Installer Component?

Now you might be thinking: hey… why don't you just remove the Component Installer module from the Edge Client and you won't have this issue? Well, the simple answer is that the Component Installer module is not only used to install/upgrade the client. In fact, it also seems to be used when performing the Machine Check Info from the Access Policy when authenticating the user. So removing the Component Installer module results in other issues.

The Solution/Workaround

The solution I came up with is to store each version of the urxvpn.cab file in an iFile and then use an iRule to deliver the correct version whenever a client tries to fetch the file for reinstallation.

What's needed? In order to make this work we need to:

Grab a copy of urxvpn.cab from each version of the client
Create an iFile for each of these versions
Install the iRule
Attach the iRule to the Virtual Server that is running the Access Policy

Fetching the file from the apmclients ISOs

For every version of the APM client that is available within your organization, a corresponding iFile needs to be created. To create the iFiles automatically you can do the following on the APM server:

Log in to the CLI console with SSH. Make sure you are in bash by typing bash.

Create temporary directories:

mkdir /tmp/apm-urxvpn
mkdir /tmp/apm-iso

Run the following (still in bash, not TMSH) on the BIG-IP APM server to automatically extract the urxvpn.cab file from each installed image and save the copies in the folder /tmp/apm-urxvpn:

for c in /shared/apm/images/apmclients-*
do
version="$(echo "$c" | awk -F. \
'{gsub(".*apmclients-","");printf "%04d.%04d.%04d.%04d", $1, $2, $3, $4}')" && \
(mount -o ro $c /tmp/apm-iso
cp /tmp/apm-iso/sam/www/webtop/public/download/urxvpn.cab \
/tmp/apm-urxvpn/URXVPN.CAB-$version
umount /tmp/apm-iso)
done

Check the files copied:

ls -al /tmp/apm-urxvpn

Import each file either with tmsh or with the GUI. We will cover how to import with tmsh below.
If you prefer to do it with the GUI, more information about how to do it can be found in K13423. You can use the following script to automatically import all the files:

cd /tmp/apm-urxvpn
for f in URXVPN.CAB-*
do
printf "create sys file ifile $f source-path file:$(pwd)/$f\ncreate ltm ifile $f file-name $f\n" | tmsh
done

Save the new configuration:

tmsh -c "save sys config"

Time to create the iRule:

when CLIENT_ACCEPTED {
    ACCESS::restrict_irule_events disable
}
when HTTP_REQUEST {
    set uri [HTTP::uri]
    set ua [HTTP::header "User-Agent"]
    if {$uri starts_with "/vdesk" || $uri starts_with "/pre"} {
        set version ""
        regexp -- {EdgeClient/(\d{4}\.\d{4}\.\d{4}\.\d{4})} $ua var version
        if {$version != ""} {
            table set -subtable vpn_client_ip_to_versions [IP::client_addr] $version 86400 86400
        } else {
            log local0.debug "Unable to parse version from: $ua for IP: [IP::client_addr] URI: $uri"
        }
    } elseif {$uri == "/public/download/urxvpn.cab"} {
        set version ""
        regexp -- {EdgeClient/(\d{4}\.\d{4}\.\d{4}\.\d{4})} $ua var version
        if {$version == ""} {
            log local0.warning "Unable to parse version from: $ua, will search session table"
            set version [table lookup -subtable vpn_client_ip_to_versions [IP::client_addr]]
            log local0.warning "Version in table: $version"
        }
        if {$version == ""} {
            log local0.warning "Unable to find version in session table"
            HTTP::respond 404 content "Missing version in request" "Content-Type" "text/plain"
        } else {
            set out ""
            catch {
                set out [ifile get "/Common/URXVPN.CAB-$version"]
            }
            if {$out == ""} {
                log local0.error "Didn't find urxvpn.cab file for Edge Client version: $version"
                HTTP::respond 404 content "Unable to find requested file for version $version\n" "Content-Type" "text/plain"
            } else {
                HTTP::respond 200 content $out "Content-Type" "application/vnd.ms-cab-compressed"
            }
        }
    }
}

Add the iRule to the APM Virtual Server.

Known Limitations

If multiple clients with different versions of the Edge Client are behind the same IP address, they might download the wrong version. This is because the client doesn't present its version when the request for the file urxvpn.cab reaches the iRule, which is why the iRule stores the version per source IP address based on other requests related to the VPN. More information about this problem can be found in K000132735.
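As a final sanity check on the import step above, before attaching the iRule it is worth confirming that every client version landed as an iFile. A minimal check from the shell (the grep pattern simply matches the naming convention used earlier):

# Both the system file objects and the LTM iFile objects should list one entry per client version
tmsh list sys file ifile | grep URXVPN.CAB
tmsh list ltm ifile | grep URXVPN.CAB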
Knowledge sharing: Containers, Kubernetes, Openshift, F5 Container Connector, NGINX Ingress

For anyone interested, there is free training for "F5 Container Connector for Kubernetes" and "F5 OpenShift Container Integration" at LearnF5. For NGINX installed in Kubernetes there is enough info, but for the F5 Container Connector/Container Ingress Services there is not so much:

https://docs.nginx.com/nginx-ingress-controller/f5-ingresslink/
https://www.nginx.com/products/nginx-ingress-controller/
https://community.f5.com/t5/technical-articles/better-together-f5-container-ingress-services-and-nginx-plus/ta-p/280471

F5 DevCentral also has a YouTube channel with useful info:
https://www.youtube.com/c/devcentral

If you don't have good knowledge about containers and Kubernetes, then first check the links below. For Docker containers you will find a lot of good training on YouTube, for example:

you need to learn Docker RIGHT NOW!! // Docker Containers 101 - YouTube
Docker Tutorial for Beginners [FULL COURSE in 3 Hours] - YouTube
Docker overview | Docker Documentation

The same is true for Kubernetes, and they have a free test lab on their site:

Learn Kubernetes Basics | Kubernetes
you need to learn Kubernetes RIGHT NOW!! - YouTube

Red Hat has some free training, and IBM provides some free labs for Containers, Kubernetes, OpenShift, etc.:

Training and Certification (redhat.com)
IBM CloudLabs: Free, Interactive Kubernetes Tutorials | IBM
Red Hat OpenShift Tutorials | IBM

How to create a wideip on F5 GTM DNS using Python and REST API
I have covered how to create GTM/DNS objects like a Wideip, Pool, Datacenter, Server and Virtual Server using the REST API and Python in this blog post: https://f5automation.xyz/articles/How-to-create-wideip-F5-GTM-DNS-using-Python
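The blog post shows the Python version; the underlying iControl REST call is small enough to sketch with curl as well. This is a minimal sketch only: the wide IP name, pool name and credentials are placeholders, and it assumes the referenced GTM pool already exists.

# Create an A-record wide IP that references an existing GTM pool
curl -sku admin:password -X POST https://<big-ip-management-ip>/mgmt/tm/gtm/wideip/a \
  -H "Content-Type: application/json" \
  -d '{"name":"app.example.com","partition":"Common","pools":[{"name":"app_pool","partition":"Common"}]}'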
Step-by-step guide to build a F5 AWAF lab on Google Cloud

This is a small step-by-step guide on how to build an F5 AWAF (Advanced Web Application Firewall) lab environment on GCP (Google Cloud Platform). The purpose of this guide is to provide an easy way to quickly spin up a lab environment that can be used for study or demo purposes.

https://github.com/pedrorouremalta/f5-awaf-lab-on-gcp

Stable point of Configuration "Rescue Point"
Hello Everyone,

Sometimes we need to save our configuration to a stable point after big modifications. I know that we can take a user configuration set (UCS) to save our configuration with the new modifications, but what about defining a new object or stable point to which I can save the new configuration after making sure that all services and the device work well without issues?

I suggest this because busy networks such as ISPs, governments, banks, etc. have large numbers of F5 appliances with multiple administrators configuring them and maintaining a huge number of critical services. So I suggest defining a "Rescue point" and saving the new configuration to it after validating that all services work well after new changes in the configuration files, or any change to the F5 appliance at all. This would enable us to roll back any new changes that impacted running services, F5 resources, etc.

To illustrate the suggestion: when we need to create a UCS file we issue (#tmsh save sys ucs UCS_NAME.ucs), so similarly we could issue a new command to define a "Rescue point", like this (#tmsh save sys Rescue), and issue a command such as (#tmsh load sys Rescue) to roll back again to the Rescue point, which is a validated stable point for the whole F5 system.

Briefly: define a stable point, which we can call the "Rescue point", containing a good state of the configuration and running services, and return to it if anyone makes a change that impacts the running services. Also, a Compare option could be added to F5 to see the difference between the "Rescue configuration" and the "newly added configuration", to see exactly which addition to the F5 configuration caused the impact. This would also help restore impacted services quickly without the need to load UCS files. This suggestion is not a replacement for UCS files; it is only a lightweight option that can help with quick service restoration, facilitate troubleshooting, or give administrators another way to back up their validated changes.

I would like to see experts' thoughts about this suggestion.

Thanks
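In the meantime, something close to the proposed rescue point can be approximated with the existing UCS and SCF mechanisms; a minimal sketch, with placeholder file names:

# Save a validated "rescue point" as both a UCS archive and a single configuration file (SCF)
tmsh save sys ucs rescue-point.ucs
tmsh save sys config file rescue-point.scf

# Later, compare the running configuration against the rescue point to see what changed
tmsh save sys config file current.scf
diff /var/local/scf/rescue-point.scf /var/local/scf/current.scf

# Roll back to the rescue point if the new changes impacted running services
tmsh load sys ucs rescue-point.ucs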