Power of tmsh commands using Ansible
Why is data important

Having accurate data has become an integral part of decision making. The data could be for a simple decision, like purchasing the newest electronic gadget on the market, or for a complex decision about which hardware and/or software platform works best for your highly demanding application and will give your customers the best user experience. In either case, research and data collection become essential. Choosing which F5 hardware and/or software to use in your environment follows the same principles: your IT team needs data to make the right decision. The data could be CPU, throughput, and/or memory utilization of your F5 gear, and it could cover a day, a month, or a year, depending on your application usage patterns.

Ansible to the rescue

Your environment could contain tens, hundreds, or even thousands of F5 BIG-IPs, and manually logging into each one to gather data would be highly inefficient. A great and simple alternative is to use Ansible as an automation framework to perform this task, freeing you up for your other job functions. Let's take a look at the components needed to use Ansible.

An inventory file in Ansible defines the hosts against which your playbook is going to run. Below is an example file defining five F5 hosts; it can be expanded to represent however many BIG-IPs you manage.

Inventory file: 'inventory.yml'

```
[f5]
ltm01 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm02 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm03 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm04 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm05 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
```

A playbook defines the tasks that are going to be executed. This playbook uses the bigip_command module, which can take any BIG-IP tmsh command as input and return its output. Here we use tmsh commands to gather performance data from the BIG-IPs. The output from each BIG-IP is stored in a file that can be referenced after the playbook finishes execution.

Playbook: 'performance_data.yml'

```yaml
---
- name: Create empty file
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Creating an empty file
      file:
        path: "./{{ filename }}"
        state: touch

- name: Gather stats using tmsh command
  hosts: f5
  connection: local
  gather_facts: false
  serial: 1
  tasks:
    - name: Gather performance stats
      bigip_command:
        provider:
          server: "{{ server }}"
          user: "{{ user }}"
          password: "{{ password }}"
          server_port: "{{ server_port }}"
          validate_certs: "{{ validate_certs }}"
        commands:
          - show sys performance throughput historical
          - show sys performance system historical
      register: result

    - lineinfile:
        line: "\n###BIG-IP hostname => {{ inventory_hostname }} ###\n"
        insertafter: EOF
        dest: "./{{ filename }}"

    - lineinfile:
        line: "{{ result.stdout_lines }}"
        insertafter: EOF
        dest: "./{{ filename }}"

    - name: Format the file
      shell:
        cmd: sed 's/,/\n/g' ./{{ filename }} > ./{{ filename }}_formatted

    - pause:
        seconds: 10

- name: Delete file
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Delete extra file created (delete file)
      file:
        path: "./{{ filename }}"
        state: absent
```
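One caveat before running this: the inventory above stores credentials in plain text, which is convenient in a lab but not something you would want in production. Below is a minimal sketch of one alternative, assuming the credentials have been exported as environment variables; the variable names F5_USER and F5_PASSWORD are placeholders, not anything the article prescribes.

```yaml
# Sketch only: pull BIG-IP credentials from the controller's environment
# instead of the inventory file. Assumes F5_USER and F5_PASSWORD are
# exported in the shell that runs ansible-playbook.
- name: Gather performance stats
  bigip_command:
    provider:
      server: "{{ server }}"
      user: "{{ lookup('env', 'F5_USER') }}"
      password: "{{ lookup('env', 'F5_PASSWORD') }}"
      server_port: "{{ server_port }}"
      validate_certs: "{{ validate_certs }}"
    commands:
      - show sys performance throughput historical
      - show sys performance system historical
  register: result
```

Ansible Vault is another common way to keep secrets out of the inventory file.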
Execution:

The execution command takes as input the playbook name, the inventory file, and the filename where the output will be stored. (There are different ways of defining and passing parameters to a playbook; below is one such example.)

```
ansible-playbook performance_data.yml -i inventory.yml --extra-vars "filename=perf_output"
```

Snippet of expected output:

```
###BIG-IP hostname => ltm01 ###

Sys::Performance Throughput
-----------------------------------------------------------------------
Throughput(bits)(bits/sec)     Current   3 hrs  24 hrs  7 days  30 days
-----------------------------------------------------------------------
Service                         223.8K  258.8K  279.2K  297.4K   112.5K
In                              212.1K  209.7K  210.5K  243.6K    89.5K
Out                              21.4K   21.0K   21.1K   57.4K    30.1K

-----------------------------------------------------------------------
SSL Transactions               Current   3 hrs  24 hrs  7 days  30 days
-----------------------------------------------------------------------
SSL TPS                              0       0       0       0        0

-----------------------------------------------------------------------
Throughput(packets)(pkts/sec)  Current   3 hrs  24 hrs  7 days  30 days
-----------------------------------------------------------------------
Service                             79      82      83      63       62
In                                  41      40      40      34       32
Out                                 41      40      40      32       34

Sys::Performance System
------------------------------------------------------------
System CPU Usage(%)   Current   3 hrs  24 hrs  7 days  30 days
------------------------------------------------------------
Utilization                17      18      18      18       17

------------------------------------------------------------
Memory Used(%)        Current   3 hrs  24 hrs  7 days  30 days
------------------------------------------------------------
TMM Memory Used            10      10      10      10       10
Other Memory Used          55      55      54      54       53
Swap Used                   0       0       0       0        0

###BIG-IP hostname => ltm02 ###

Sys::Performance Throughput
-----------------------------------------------------------------------
Throughput(bits)(bits/sec)     Current   3 hrs  24 hrs  7 days  30 days
-----------------------------------------------------------------------
Service                         202.3K  258.7K  279.2K  297.4K   112.5K
In                              190.8K  209.7K  210.5K  243.6K    89.5K
Out                              19.6K   21.0K   21.1K   57.4K    30.1K

-----------------------------------------------------------------------
SSL Transactions               Current   3 hrs  24 hrs  7 days  30 days
-----------------------------------------------------------------------
SSL TPS                              0       0       0       0        0

-----------------------------------------------------------------------
Throughput(packets)(pkts/sec)  Current   3 hrs  24 hrs  7 days  30 days
-----------------------------------------------------------------------
Service                             77      82      83      63       62
In                                  39      40      40      34       32
Out                                 37      40      40      32       34

Sys::Performance System
------------------------------------------------------------
System CPU Usage(%)   Current   3 hrs  24 hrs  7 days  30 days
------------------------------------------------------------
Utilization                21      18      18      18       17

------------------------------------------------------------
Memory Used(%)        Current   3 hrs  24 hrs  7 days  30 days
------------------------------------------------------------
TMM Memory Used            10      10      10      10       10
Other Memory Used          55      55      54      54       53
Swap Used                   0       0       0       0        0
```
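Once the playbook has run, the formatted file is plain text and can be sliced with ordinary shell tools. For example, assuming the filename passed above, something like the following would pull out a single device's section:

```
# Show the lines following ltm01's marker in the formatted output file
grep -A 40 'hostname => ltm01' ./perf_output_formatted
```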
The data obtained is historical data over a period of time. Sometimes it is also important to gather the peak usage of throughput/memory/CPU over time, not just the average. Stay tuned: we will discuss how to obtain that information in an upcoming article.

Conclusion

Use this data to learn your traffic patterns and propose the most appropriate BIG-IP hardware/software for your environment. The data could be collected directly in your production environment or in a staging environment; either way, it will help you decide which purchasing strategy gives you the most value from your BIG-IPs.

For reference: https://www.f5.com/pdf/products/big-ip-local-traffic-manager-ds.pdf

The above is one example of how you can get started with using Ansible and tmsh commands. Using this method you can potentially achieve close to 100% automation on the BIG-IP.
SMTP Load Balancing without SNAT: Outbound traffic problems

Hello, I'm sorry because this issue has been reviewed in the forum before, but in our case it doesn't work and we don't know what the problem is. We have two SMTP VLANs, internal (192.168.26.0/24) and external (192.168.227.0/24). On the external VLAN we have a standard virtual server (192.168.227.11) with an SMTP pool containing two servers on the internal VLAN (192.168.26.11 and 192.168.26.12). We have SNAT Automap disabled because we want to keep the original source IP, so the SMTP servers have their default gateway on the F5 (192.168.26.1). This works OK.

The problem is outbound traffic. For example, when an SMTP server tries to send outbound traffic to the Internet or to Exchange servers through the F5, it doesn't work. The internal servers can reach the F5's internal floating IP (192.168.26.1) by ping, but the F5 doesn't seem to know what to do with traffic originated on the SMTP servers, or where to send it. The same happens with any connection started on the servers. We have tried configuring a 0.0.0.0/0.0.0.0:any forwarding (IP) virtual server enabled on the internal VLAN, but it doesn't work: traffic reaches the F5 (we see inbound traffic in the statistics) but doesn't continue to the external VLAN. We have also tried a default route (0.0.0.0/0 -> 192.168.227.1), but that doesn't work either. Could you help us? Thank you very much!
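(Editor's note: for readers unfamiliar with the setup being described, a wildcard forwarding virtual server of this kind is typically created along these lines in tmsh. This is a sketch only; the virtual server name and VLAN name are placeholders, not taken from the post.)

```
# Sketch: wildcard IP-forwarding virtual server listening on the internal
# VLAN, so connections initiated by the pool members are routed by the BIG-IP.
tmsh create ltm virtual vs_outbound_forwarder \
    destination 0.0.0.0:0 \
    ip-forward \
    profiles add { fastL4 } \
    vlans add { internal } vlans-enabled
```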
Persistence Profile Issues

I am having an interesting issue with a persistence profile. It works wonderfully in QA but is not working in production. I've created a persistence profile with the following attributes:

Parent Profile: Universal
Mirror Persistence: Enabled
iRule: Enabled and pointed at the rule below
Timeout: Enabled and set to 28800 seconds

The iRule:

```
when HTTP_RESPONSE {
    if { [HTTP::cookie exists "ASP.NET_SessionId"] } {
        persist add uie [HTTP::cookie "ASP.NET_SessionId"] pool po-server-https
    }
}

when HTTP_REQUEST {
    if { [HTTP::cookie exists "ASP.NET_SessionId"] } {
        persist uie [HTTP::cookie "ASP.NET_SessionId"] pool po-server-https
    }
}
```

On the virtual server I then set the Default Persistence Profile to this newly created profile. This all works wonderfully in QA, and the client is persisted to one server based on their ASP.NET cookie value. The pool names are correct, the cookie exists in both environments, etc., but in production the persistence is not taking place and the client jumps between servers in the pool. Does anyone have ideas on this one, or a path forward to troubleshoot it via clean logging that doesn't inundate the server?
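(Editor's note: as an illustration of the kind of targeted logging that can help with this sort of troubleshooting, the iRule above could be instrumented roughly as follows. This is a sketch added for this writeup, not part of the original post.)

```
when HTTP_REQUEST {
    if { [HTTP::cookie exists "ASP.NET_SessionId"] } {
        # Log the session ID used for the persistence lookup.
        log local0. "Persist lookup on ASP.NET_SessionId=[HTTP::cookie ASP.NET_SessionId]"
        persist uie [HTTP::cookie "ASP.NET_SessionId"] pool po-server-https
    }
}

when LB_SELECTED {
    # Log which pool member was actually chosen for this connection.
    log local0. "Selected member [LB::server addr]:[LB::server port]"
}
```

Messages logged with log local0. land in /var/log/ltm; a rule like this is best kept only for a short troubleshooting window so the log volume stays manageable.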
BIG-IQ v. 5.0 - how to view traffic statistics from installed BIG-IP devices

Hello, I have successfully added 4 BIG-IP devices to BIG-IQ from Device Management > BIG-IP DEVICES > Add Device (after installing the required framework using "update_bigip.sh"). Next, I imported the Local Traffic module (LTM) for each BIG-IP from Device Management > Services. Virtual Servers, Pools, etc. for each BIG-IP are displayed properly in the ADC. My question is: how do I view traffic statistics for virtual servers on each BIG-IP? Specifically, I'm looking for the information shown in the following section of a BIG-IP device: Statistics ›› Module Statistics : Local Traffic ›› Virtual Servers : "virtual_server_name" (this is where the "Traffic Details" in "Bits", "Packets", and "Connections" are displayed). Thank you!
Event log soap[22458]

Hello, I'm trying to understand a log message on our F5 BIG-IP 13.1.1.4. Under System -> Logs -> Local Traffic, I have several entries like:

LogLevel: info  Service: soap[22458]  Event: src=127.0.0.1, user=

To be clear, there is nothing after "user=" :) Can anyone explain what this entry means, and whether it is possible to filter these entries out? Best regards.
Logging traffic from span port

Hello, I'm trying to log traffic coming from a SPAN port. The traffic is arriving properly at the F5 (if I run a tcpdump on the interface, I can see the traffic), but for some reason the F5 is not logging it. Is there a special way to configure the F5 (BIG-IP) for this? I've searched the documentation but haven't found an answer. Note that I don't want the F5 to act as a load balancer; I just want it to alert in case of a web attack. I've created a new logging profile that logs all traffic. Can anyone help me with this? I've tried different configs but no luck. Thank you very much. Regards.
iRule Trigger via event or SNMP

Hi, I'm using an iRule to route traffic directly to AWS when my throughput is over 900MB. For now I'm monitoring the throughput via SNMP and applying the iRule manually. Is there any way to make this process automatic? Is there any way to trigger an iRule by an event? If so, is there an event suited to my goal?