Power of tmsh commands using Ansible
Why is data important

Having accurate data has become an integral part of decision making. The data could inform a simple decision, like purchasing the newest electronic gadget on the market, or a complex one, like choosing the hardware and/or software platform that works best for your highly demanding application and delivers the best user experience to your customers. In either case, research and data collection become essential. Deciding what kind of F5 hardware and/or software to use in your environment follows the same principles: your IT team requires data to make the right decision. The data could include CPU, throughput, and/or memory utilization of your F5 gear. It could also cover just a period of a day, a month, or a year, depending on application usage patterns.

Ansible to the rescue

Your environment could have tens, hundreds, or even thousands of F5 BIG-IPs; manually logging into each one to gather data would be highly inefficient. A great and simple alternative is to use Ansible as an automation framework to perform this task, freeing you up for your other job functions. Let's take a look at some of the components needed to use Ansible.

An inventory file in Ansible defines the hosts against which your playbook is going to run. Below is an example of a file defining F5 hosts, which can be expanded to represent your tens, hundreds, or thousands of BIG-IPs.

Inventory file: 'inventory.yml'

```
[f5]
ltm01 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm02 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm03 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm04 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm05 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
```

A playbook defines the tasks that are going to be executed. In this playbook we are using the bigip_command module, which can take any BIG-IP tmsh command as input and provide the output. Here we are going to use tmsh commands to gather performance data from the BIG-IPs. The output from each BIG-IP is stored in a file that can be referenced after the playbook finishes execution.

Playbook: 'performance_data.yml'

```yaml
---
- name: Create empty file
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Creating an empty file
      file:
        path: "./{{ filename }}"
        state: touch

- name: Gather stats using tmsh command
  hosts: f5
  connection: local
  gather_facts: false
  serial: 1
  tasks:
    - name: Gather performance stats
      bigip_command:
        provider:
          server: "{{ server }}"
          user: "{{ user }}"
          password: "{{ password }}"
          server_port: "{{ server_port }}"
          validate_certs: "{{ validate_certs }}"
        commands:
          - show sys performance throughput historical
          - show sys performance system historical
      register: result

    - lineinfile:
        line: "\n###BIG-IP hostname => {{ inventory_hostname }} ###\n"
        insertafter: EOF
        dest: "./{{ filename }}"

    - lineinfile:
        line: "{{ result.stdout_lines }}"
        insertafter: EOF
        dest: "./{{ filename }}"

    - name: Format the file
      shell:
        cmd: sed 's/,/\n/g' ./{{ filename }} > ./{{ filename }}_formatted

    - pause:
        seconds: 10

- name: Delete file
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Delete extra file created (delete file)
      file:
        path: ./{{ filename }}
        state: absent
```

Execution: The execution command takes as input the playbook name, the inventory file, and the filename where the output will be stored.
(There are different ways of defining and passing parameters to a playbook; below is one such example.)

```bash
ansible-playbook performance_data.yml -i inventory.yml --extra-vars "filename=perf_output"
```

Snippet of expected output (column spacing restored for readability):

```
###BIG-IP hostname => ltm01 ###

[['Sys::Performance Throughput'
'-----------------------------------------------------------------------'
'Throughput(bits)(bits/sec)    Current  3 hrs   24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'Service                       223.8K   258.8K  279.2K  297.4K  112.5K'
'In                            212.1K   209.7K  210.5K  243.6K  89.5K'
'Out                           21.4K    21.0K   21.1K   57.4K   30.1K'
''
'-----------------------------------------------------------------------'
'SSL Transactions              Current  3 hrs   24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'SSL TPS                       0        0       0       0       0'
''
'-----------------------------------------------------------------------'
'Throughput(packets)(pkts/sec) Current  3 hrs   24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'Service                       79       82      83      63      62'
'In                            41       40      40      34      32'
'Out                           41       40      40      32      34']

['Sys::Performance System'
'------------------------------------------------------------'
'System CPU Usage(%)           Current  3 hrs   24 hrs  7 days  30 days'
'------------------------------------------------------------'
'Utilization                   17       18      18      18      17'
''
'------------------------------------------------------------'
'Memory Used(%)                Current  3 hrs   24 hrs  7 days  30 days'
'------------------------------------------------------------'
'TMM Memory Used               10       10      10      10      10'
'Other Memory Used             55       55      54      54      53'
'Swap Used                     0        0       0       0       0']]

###BIG-IP hostname => ltm02 ###

[['Sys::Performance Throughput'
'-----------------------------------------------------------------------'
'Throughput(bits)(bits/sec)    Current  3 hrs   24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'Service                       202.3K   258.7K  279.2K  297.4K  112.5K'
'In                            190.8K   209.7K  210.5K  243.6K  89.5K'
'Out                           19.6K    21.0K   21.1K   57.4K   30.1K'
''
'-----------------------------------------------------------------------'
'SSL Transactions              Current  3 hrs   24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'SSL TPS                       0        0       0       0       0'
''
'-----------------------------------------------------------------------'
'Throughput(packets)(pkts/sec) Current  3 hrs   24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'Service                       77       82      83      63      62'
'In                            39       40      40      34      32'
'Out                           37       40      40      32      34']

['Sys::Performance System'
'------------------------------------------------------------'
'System CPU Usage(%)           Current  3 hrs   24 hrs  7 days  30 days'
'------------------------------------------------------------'
'Utilization                   21       18      18      18      17'
''
'------------------------------------------------------------'
'Memory Used(%)                Current  3 hrs   24 hrs  7 days  30 days'
'------------------------------------------------------------'
'TMM Memory Used               10       10      10      10      10'
'Other Memory Used             55       55      54      54      53'
'Swap Used                     0        0       0       0       0']]
```

The data obtained is historical data over a period of time. Sometimes it is also important to gather the peak usage of throughput/memory/CPU over time, and not just the average. Stay tuned as we will discuss how to obtain that information in an upcoming article.

Conclusion

Use the output of the data to learn the traffic patterns and propose the most appropriate BIG-IP hardware/software for your environment. This could be data collected directly in your production environment or in a staging environment, which would help you decide what purchasing strategy gives you the most value from your BIG-IPs.
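On that note about different ways of passing parameters: extra-vars can also be loaded from a YAML file, which scales better than inline key=value pairs once a playbook takes more than one input. This is a minimal sketch; the vars file name and its contents here are assumptions, not part of the original article:

```yaml
# vars.yml (hypothetical)
filename: perf_output
```

```bash
ansible-playbook performance_data.yml -i inventory.yml --extra-vars "@vars.yml"
```

The @ prefix tells Ansible to read the extra variables from a file instead of the command line.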
For reference: https://www.f5.com/pdf/products/big-ip-local-traffic-manager-ds.pdf

The above is one example of how you can get started with Ansible and tmsh commands. Using this method, you can potentially achieve close to 100% automation on the BIG-IP.
F5 Automation with Ansible Tips and Tricks

Getting Started with Ansible and F5

In this article we provide a simple set of videos that demonstrate, step by step, how to implement automation with Ansible. In the last video, we go further and demonstrate how telemetry and automation may be used in combination to address potential performance bottlenecks and ensure application availability. To start, we provide details on how to get started with Ansible automation using the Ansible Automation Platform®:

Backing up your F5 device

Once you have installed and configured the Ansible Automation Platform, we transition to a basic maintenance function: an automated backup of a BIG-IP hardware device or Virtual Edition (VE). This is always recommended before major changes are made to your BIG-IP devices:

Configuring a Virtual Server

Next, we use Ansible to configure a virtual server, a task that is most frequently performed manually on the BIG-IP. When changes to a BIG-IP are infrequent, manual intervention may not be so cumbersome; however, large enterprise customers may need to perform these tasks hundreds of times:

Replace an SSL Certificate

The next video demonstrates how to use Ansible to replace an SSL certificate on a BIG-IP. It is important to note that this video shows the certificate being applied on a BIG-IP and then validated by browsing to the application website:

Configure and Deploy an iRule

The next administrative function demonstrates how to configure and push an iRule onto a BIG-IP device using the Ansible Automation Platform®. Again, this is a standard administrative task that can be simply automated via Ansible:

Delete the Existing Virtual Server

Now we delete the above configuration to roll back to a steady state. This is a common administrative task when an application is retired. We again demonstrate how Ansible automation may be used to perform these simple administrative tasks:

Telemetry and Automation: Using Threshold Triggers to Automate Tasks and Fix Performance Bottlenecks

You now have a clear demonstration of how to use Ansible automation to perform routine tasks on a BIG-IP platform. Once you have become proficient with routine Ansible tasks, we can explore more sophisticated, high-level automation. In the demonstration below, we show how BIG-IP administrators using SSL Orchestrator® (SSLO) can combine telemetry with automation to address performance bottlenecks in an application environment:

Resources:

That is a short series of tutorials on how to perform routine tasks using automation, plus a preview of a more sophisticated use of automation based upon telemetry and automatic thresholds. For more detail on our partnership, please visit our F5/Ansible page or visit the Red Hat Automation Hub for information on the F5 Ansible certified collections.
https://www.f5.com/ansible
https://www.ansible.com/products/automation-hub
https://galaxy.ansible.com/f5networks/f5_modules
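The backup video above drives this through Ansible Automation Platform job templates. As a rough stand-alone equivalent, the same backup can be scripted with the bigip_ucs_fetch module from the f5networks.f5_modules collection. This is a minimal sketch assuming provider-style variables (server, user, password) defined in your inventory; it is not the exact playbook used in the video:

```yaml
- name: Back up a BIG-IP to a local UCS archive
  hosts: f5
  connection: local
  gather_facts: false
  tasks:
    - name: Create a UCS on the BIG-IP and download it
      bigip_ucs_fetch:
        src: "{{ inventory_hostname }}.ucs"            # UCS file created on the BIG-IP
        dest: "./backups/{{ inventory_hostname }}.ucs" # local copy on the Ansible host
        provider:
          server: "{{ server }}"
          user: "{{ user }}"
          password: "{{ password }}"
          validate_certs: no
      delegate_to: localhost
```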
Connection Refused error when running Ansible Playbook

I'm trying to run an Ansible playbook to create a new local user account on a BIG-IP VE running 13.1.3.4, using the bigip_user module. I'm able to run tasks using the bigip_device_info and bigip_config modules successfully, but whenever I try to run a playbook with a module that changes settings (i.e., bigip_user or bigip_snmp_community), it errors out with the message:

"An exception occurred during task execution. To see the full traceback, use -vvv. The error was: urllib.error.URLError: <urlopen error [Errno 111] Connection refused>"

I'm new to Ansible on the BIG-IP platform. Any help on this is greatly appreciated.

Playbook:

```yaml
---
- name: Add users playbook
  hosts: "{{ devices }}"
  strategy: free
  order: sorted
  connection: local
  gather_facts: no
  become: no
  become_method: enable
  ignore_errors: no
  collections:
    - f5networks.f5_modules
  vars:
    provider:
      server: "{{ ansible_host }}"
      user: <username>
      password: <password>
      validate_certs: no
      server_port: 443
  tasks:
    - name: Add or update the user
      bigip_user:
        provider: "{{ provider }}"
        username_credential: user
        password_credential: password
        update_password: always
        full_name: User
        shell: bash
        partition_access:
          - all:admin
        state: present
      delegate_to: localhost
```

Error:

```
The full traceback is:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/urllib/request.py", line 1350, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/usr/local/lib/python3.7/http/client.py", line 1277, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/local/lib/python3.7/http/client.py", line 1323, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.7/http/client.py", line 1272, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.7/http/client.py", line 1032, in _send_output
    self.send(msg)
  File "/usr/local/lib/python3.7/http/client.py", line 972, in send
    self.connect()
  File "/usr/local/lib/python3.7/http/client.py", line 1439, in connect
    super().connect()
  File "/usr/local/lib/python3.7/http/client.py", line 944, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/usr/local/lib/python3.7/socket.py", line 728, in create_connection
    raise err
  File "/usr/local/lib/python3.7/socket.py", line 716, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
```

Thanks,
-Edson
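A hedged troubleshooting aside, not part of the original post: a socket-level "connection refused" usually means nothing is listening at the server:port the provider points to, so a first step is to verify the iControl REST endpoint directly from the Ansible host. The credentials and address below are placeholders:

```bash
# Should return a JSON document with the TMOS version if the REST interface is reachable
curl -sk -u admin:'<password>' https://<bigip-mgmt-ip>:443/mgmt/tm/sys/version
```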
Checksums for F5 Supported Cloud templates on GitHub

Problem this snippet solves: Checksums for F5 supported cloud templates.

F5 Networks provides checksums for all of our supported Amazon Web Services CloudFormation, Microsoft Azure ARM, Google Deployment Manager, and OpenStack Heat Orchestration templates. See the README files on GitHub for information on individual templates. You can find the templates in the appropriate supported directory on GitHub:

Amazon CloudFormation templates: https://github.com/F5Networks/f5-aws-cloudformation/tree/master/supported
Microsoft ARM Templates: https://github.com/F5Networks/f5-azure-arm-templates/tree/master/supported
Google Templates: https://github.com/F5Networks/f5-google-gdm-templates
VMware vCenter Templates: https://github.com/F5Networks/f5-vmware-vcenter-templates
OpenStack Heat Orchestration Templates: https://github.com/F5Networks/f5-openstack-hot
F5 Ansible Modules: http://docs.ansible.com/ansible/latest/list_of_network_modules.html#f5

Because this page was getting much too long to host all the checksums for all cloud platforms, we now have individual pages for the checksums:

Amazon AWS checksums
Microsoft Azure checksums
Google Cloud checksums
VMware vCenter checksums
OpenStack Heat Orchestration checksums
F5 Ansible Module checksums

Code:

You can get a checksum for a particular template by running one of the following commands, depending on your operating system:

* **Linux**: `sha512sum <template-file>`
* **Windows using CertUtil**: `CertUtil -hashfile <template-file> SHA512`
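As an illustrative pattern (the script and its arguments are hypothetical, not from the F5 repositories), a downloaded template can be verified against the published value like this:

```bash
#!/usr/bin/env bash
# Compare a downloaded template's SHA-512 against the checksum published in the matching README.
# Usage: ./verify.sh <template-file> <published-sha512>
set -euo pipefail
file="$1"; published="$2"
actual="$(sha512sum "$file" | awk '{print $1}')"
if [[ "$actual" == "$published" ]]; then
  echo "OK: checksum matches"
else
  echo "MISMATCH: got $actual" >&2
  exit 1
fi
```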
How to use Ansible with Cisco routers

Quick Intro

For those who don't know, there is an Ansible plugin called network_cli to retrieve network device configuration for backup, inspection, and even command execution. So, let's assume we have Ansible already installed and 2 routers. I used Debian Linux here and had to install Python 3 first. I also installed pip, because I wanted a specific version of Ansible (2.5+) that would allow me to use the network_cli plugin.

Note: we can list the available Ansible versions by just typing pip install ansible==

I also created a user named ansible, edited the Linux sudoers file with the visudo command, and gave the ansible user permission to run root commands without being prompted for a password.

Quick Set Up

My directory structure and the files used for this lab test are described below.

Note: we can replace cisco1.rodrigo.example with an IP address too. In Ansible, there is a default config file (ansible.cfg) where we store the global config, i.e. how we want Ansible to behave. We also keep the list of our hosts in an inventory file (inventory.yml here). There is a default folder (group_vars) where we can store variables that apply to any router we run Ansible against, which makes sense in this case as my router credentials are the same. Lastly, retrieve_backup.yml is my actual playbook, i.e. where I tell Ansible what to do.

Note: I manually logged in to cisco1.rodrigo.example and cisco2.rodrigo.example to populate the SSH known_hosts files; otherwise Ansible complains these hosts are untrusted.

Populating our Playbook file retrieve_backup.yml

Let's say we just want to retrieve the OSPF configuration from our Cisco routers. We can use ios_command to type in any command to a Cisco router and use register to store the output in a variable (a hedged reconstruction of the full playbook appears at the end of this article).

Note: be careful with the indentation; I used 2 spaces here.

We can then copy the content of the variable to a file in a given directory. In this case, we copied whatever is in the ospf_output variable to the ospf_config directory. From the playbook, we can work out that variables are referenced between {{ }}, and we might be wondering why we need to append stdout[0] to ospf_output, right? If you know Python, you might be interested in knowing a bit more about what's going on under the hood, so I'll clarify things a bit more here. The variable ospf_output is actually a dictionary and stdout is one of its keys. In reality, ospf_output.stdout could be represented as ospf_output['stdout']. We add the [0] because the object retrieved by the stdout key is not a string; it's a list! And [0] just represents the first object in the list.

Executing our Playbook

I'll create the ospf_config directory first, and then we execute our playbook by issuing the ansible-playbook command. Afterwards, we can check that our OSPF config was retrieved. We can pretty much type in any IOS command we'd type on a real router, either to configure it or to retrieve its configuration. We could also append the date to the file name, but that's out of the scope of this article. That's it for now.
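Since the playbook and inventory in this article were shown only as screenshots, here is a hedged reconstruction of what the playbook could look like based on the prose above; the group name routers and the exact OSPF show command are assumptions:

```yaml
# retrieve_backup.yml (hypothetical reconstruction)
- name: Retrieve OSPF configuration from Cisco routers
  hosts: routers
  connection: network_cli   # requires Ansible >= 2.5
  gather_facts: false
  tasks:
    - name: Gather the OSPF section of the running config
      ios_command:
        commands:
          - show running-config | section ospf
      register: ospf_output

    - name: Save the output to a per-device file
      copy:
        content: "{{ ospf_output.stdout[0] }}"
        dest: "ospf_config/{{ inventory_hostname }}.txt"
```

With group_vars supplying, for example, ansible_network_os: ios plus the SSH credentials, running ansible-playbook -i inventory.yml retrieve_backup.yml would drop one file per router into ospf_config.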
Dig deeper into Ansible and F5 integration

Basics of Ansible and F5 integration were covered in a joint webinar held earlier, in March 2017. To learn more about the integration, current F5 module support, and some use cases, view that webinar. We held another joint webinar in June 2017, which went into detail on the integration. We spoke about how F5 Ansible modules (which will be part of the upcoming Ansible 2.4 release) can be used to perform administrative tasks on the BIG-IP and make the BIG-IP ready for application deployment in days rather than weeks. We also touched upon use of the F5 Ansible iApps module to deploy applications on the BIG-IP. The webinar was very well received and ended with a great Q&A session. Some of the questions that came up were: how do we create playbooks for different workflows, what best practices does F5 recommend, and can we get a sample playbook. We will use this forum to answer some of those questions and dig deeper into the F5 and Ansible integration.

Now, what really is a playbook? It is nothing but a collection of tasks that are to be performed sequentially on a system. Let us consider a use case where a customer has just purchased 20 BIG-IPs and needs to get all of them networked and to a state where the BIG-IPs are ready to deploy applications. We can define a playbook which consists of the tasks required to perform Day 0 and Day 1 configuration on the BIG-IPs.

Let's start with Day 0. Networking the system consists of assigning it an NTP and DNS server address, assigning a hostname, making some SSH customizations, etc. Some of these settings are common to all the BIG-IPs. The common set of configurations can be defined using the concept of a 'role' in Ansible. Let's define an 'onboarding' role which will configure the common settings like NTP, DNS, and SSHD.

PLAYBOOK FOR ONBOARDING

```yaml
- name: Onboarding BIG-IP
  hosts: bigip
  gather_facts: false
  roles:
    - onboarding   # playbook runs tasks defined in the 'onboarding' role
```

This playbook will run against all the BIG-IPs defined in the inventory host file.

Example of inventory host file:

```
[bigip]
10.192.73.218
10.192.73.219
10.192.73.220
10.192.73.221
```

The above playbook will run the tasks specified in the 'onboarding' role in file main.yaml (playbooks/roles/onboarding/tasks/main.yaml):

```yaml
- name: Configure NTP server on BIG-IP
  bigip_device_ntp:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    ntp_servers: "{{ ntp_servers }}"
    validate_certs: False
  delegate_to: localhost

- name: Manage SSHD setting on BIG-IP
  bigip_device_sshd:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    banner: "enabled"
    banner_text: " {{ banner_text }}"
    validate_certs: False
  delegate_to: localhost

- name: Manage BIG-IP DNS settings
  bigip_device_dns:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    name_servers: "{{ dns_servers }}"
    search: "{{ dns_search_domains }}"
    ip_version: "{{ ip_version }}"
    validate_certs: False
  delegate_to: localhost
```

Variables will be referenced from the main.yaml file under the defaults directory for the 'onboarding' role (playbooks/roles/onboarding/defaults/main.yaml):

```yaml
username: admin
password: admin
banner_text: "--------Welcome to Onboarding BIGIP----------"
ntp_servers:
  - '172.27.1.1'
  - '172.27.1.2'
dns_servers:
  - '8.8.8.8'
  - '4.4.4.4'
dns_search_domains:
  - 'local'
  - 'localhost'
ip_version: 4
```

The BIG-IP is now ready to deploy applications. One such application is configuring the BIG-IP to securely load balance applications.
This requires configuring the following on the BIG-IP:

- VLANs
- Self-IPs
- Nodes/members (2)
- Pool (1)
- Assigning the nodes to the pool
- Creating an HTTPS virtual server
- Creating a redirect virtual server, which will redirect all HTTP requests to the HTTPS virtual server (an iRule is assigned to the virtual server to achieve this)

This playbook will be run individually for each BIG-IP, since each will use different values for VLANs, Self-IPs, the virtual server address, etc. The variable values for this playbook are defined inline and not in a separate file.

PLAYBOOK FOR APPLICATION DEPLOYMENT

```yaml
- name: creating HTTPS application
  hosts: bigip
  tasks:
    - name: Configure VLANs on the BIG-IP
      bigip_vlan:
        server: "{{ inventory_hostname }}"
        user: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        name: "{{ item.name }}"
        tag: "{{ item.tag }}"
        tagged_interface: "{{ item.interface }}"
      with_items:
        - name: 'External'
          tag: '10'
          interface: '1.1'
        - name: 'Internal'
          tag: '11'
          interface: '1.2'
      delegate_to: localhost

    - name: Configure SELF-IPs on the BIG-IP
      bigip_selfip:
        server: "{{ inventory_hostname }}"
        user: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        name: "{{ item.name }}"
        address: "{{ item.address }}"
        netmask: "{{ item.netmask }}"
        vlan: "{{ item.vlan }}"
        allow_service: "{{ item.allow_service }}"
      with_items:
        - name: 'External-SelfIP'
          address: '10.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'External'
          allow_service: 'default'
        - name: 'Internal-SelfIP'
          address: '192.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'Internal'
          allow_service: 'default'
      delegate_to: localhost

    - name: Create a web01.internal node   # Creating Node1
      bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "192.168.68.140"
        name: "web01.internal"
        validate_certs: False
      delegate_to: localhost

    - name: Create a web02.internal node   # Creating Node2
      bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "192.168.68.141"
        name: "web02.internal"
        validate_certs: False
      delegate_to: localhost

    - name: Create a web-pool   # Creating a pool
      bigip_pool:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        lb_method: "ratio_member"
        monitors: http
        name: "web-pool"
        validate_certs: False
      delegate_to: localhost

    - name: Add http node to web-pool   # Assigning members to a pool
      bigip_pool_member:
        description: "HTTP Webserver-1"
        host: "{{ item.host }}"
        name: "{{ item.name }}"
        user: "admin"
        password: "admin"
        pool: "web-pool"
        port: "80"
        server: "{{ inventory_hostname }}"
        validate_certs: False
      with_items:
        - host: "192.168.168.140"
          name: "web01.internal"
        - host: "192.168.68.141"
          name: "web02.internal"
      delegate_to: localhost

    - name: Create a virtual server   # Create an HTTPS virtual server
      bigip_virtual_server:
        description: "Secure web application"
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "https_vs"
        destination: "10.10.20.120"
        port: 443
        snat: "Automap"
        all_profiles:
          - http
          - clientssl
        pool: "web-pool"
        validate_certs: False
      delegate_to: localhost

    - name: Create a redirect virtual server
      bigip_virtual_server:
        description: "Redirect Virtual server"
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "http_redirect"
        destination: "10.10.20.120"
        validate_certs: False
        port: 80
        all_profiles:
          - http
        all_rules:   # Attach an iRule to the virtual server
          - _sys_https_redirect
      delegate_to: localhost
```
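A hedged note on invocation, since the article doesn't show the run commands (the playbook file names below are assumptions): the onboarding playbook can target the whole [bigip] inventory group, while the application playbook can be pointed at one device at a time with --limit:

```bash
ansible-playbook onboarding.yml -i inventory_hosts
ansible-playbook https_app.yml -i inventory_hosts --limit 10.192.73.218
```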
Bookmark this page if you are interested in learning more. We will be updating this blog with the new F5 modules that will be supported with the Ansible 2.4 release.
Getting started with Ansible

Ansible is an orchestration and automation engine. It provides a means for you to automate the administration of different devices, from Linux to Windows and different special-purpose appliances in between. Ansible falls into the world of DevOps-related tools. You may have heard of others that play in this area as well, including:

- Chef
- Puppet
- Saltstack

In this article I'm going to briefly skim the surface of what Ansible is and how you can get started using it. I've been toying around with it for some years now, and (most recently at F5) using it to streamline some development work I've been involved in. If you, like me, are a fan of dabbling with interesting tools and swear by the "Automate all the Things!" catch-phrase, then you might take an interest in Ansible. We're going to start small though and build upon what we learn. My goal here is to eventually bring you all to the point where we're doing some crazy awesome things with Ansible and F5 products. I'll also go into some brief detail on features of Ansible that make it relatively painless to interoperate with existing F5 products. Let's get started!

So why Ansible?

Any time it comes to adopting some new technology for your everyday use, inevitably you need to ask yourself "what's in it for me?". Why not just use some custom shell scripts and pssh to do everything? Here are my reasons for using Ansible:

- It is agent-less
- The only dependencies (on the remote device) are SSH and Python; and even Python is not really a dependency
- The language that you "do" stuff in is YAML. No CS degree or programming language expertise is required (Perl, Ruby, Python, etc.)
- Extending it is simple (in my opinion)
- Actions are idempotent
- Order of operations is well-defined and work is performed top-down

Many of the original tools in the DevOps space were agent-based tools. This is a major problem for environments where it's literally (due to technology or politics) impossible to install an agent. Your SLA may prohibit you from installing software on the box. Or, you might legitimately not be able to install the software due to older libraries or other missing dependencies. Ansible has no agent requirement; a plus in my book. Most of the systems that you will come across can be, today, manipulated by Ansible. It is agent-less by design. Dependency-wise, you need to be able to connect to the machine you want to orchestrate, so it makes sense that SSH is a dependency. Also, you would like to be able to do higher-order "stuff" to a machine; that's where the Python dependency comes into play. I say dependency loosely though, because Ansible provides a way to run raw commands on remote systems regardless of whether Python is installed. For professional Ansible development though, this method of orchestrating devices is largely not recommended except in very edge cases.

Ansible's configuration language is YAML. If you have never seen YAML before, this is what it looks like:

```yaml
- name: Deploy common hosts files settings
  hosts: all
  connection: ssh
  gather_facts: true

  tasks:
    - name: Install required packages
      apt:
        name: "{{ item }}"
        state: "present"
      with_items:
        - ntp
        - ubuntu-cloud-keyring
        - python-mysqldb
```

YAML is generally composed of simple key/value pairs, lists, and dictionaries. Contrast this with the Puppet configuration language; a special DSL that resembles a real programming language:
```puppet
class sso {
  case $::lsbdistcodename {
    default: {
      $ssh_version = 'latest'
    }
  }

  class { '::sso':
    ldap_uri          => $::ldap_uri,
    dev_env           => true,
    ssh_version       => $ssh_version,
    sshd_allow_groups => $::sshd_allow_groups,
  }
}
```

Or contrast this with Chef, in which you must know Ruby to be able to use it:

```ruby
servers = search(
  :node,
  "is_server:true AND chef_environment:#{node.chef_environment}"
).sort! do |a, b|
  a.name <=> b.name
end

begin
  resources('service[mysql]')
rescue Chef::Exceptions::ResourceNotFound
  service 'mysql'
end

template "#{mysql_dir}/etc/my.conf" do
  source 'my.conf.erb'
  mode 0644
  variables :servers => servers,
            :mysql_conf => node['mysql']['mysql_conf']
  notifies :restart, 'service[mysql]'
end
```

In Ansible, work that is performed is idempotent. That's a buzzword. What does it mean? It means that an operation can be performed multiple times without changing the result beyond its initial application. If I try to add the same line to a file a thousand times, it will be added once and then not added again the other 999 times. Another example is adding user accounts: they would be added once, not many times (which might raise errors on the system).

Finally, Ansible's workflow is well defined. Work starts at the top of a playbook and makes its way to the bottom. Done. End of story. There are other tools that have a declarative model. These tools attempt to read your mind: "You declare to me how the node should look at the end of a run, and I will determine the order that steps should be run to meet that declaration." Contrast this with Ansible, which only operates top-down. We start at the first task, then move to the second, then the third, etc. This removes much of the "magic" from the equation. Often an error might occur in a declarative tool due specifically to how that tool arranges its dependency graph. When that happens, it's difficult to determine what exactly the tool was doing at the time of failure. That magic doesn't exist in Ansible; work is always top-down, whether it be tasks, roles, or dependencies.

Installation

Let's now take a moment to install Ansible itself. Ansible is distributed in different ways depending on your operating system, but one tried and true method to install it is via pip, the recommended tool for installing Python packages. I'll be working on a vanilla installation of Ubuntu 15.04.2 (vivid) for the remaining commands. Ubuntu includes a pip package that should work for you without issue. You can install it via apt-get:

```
sudo apt-get install python-pip python-dev
```

Afterwards, you can install Ansible:

```
sudo pip install markupsafe ansible==1.9.4
```

You might ask "why not Ansible 2.0?". Well, because 2.0 was just released and the community is busy ironing out some new-release bugs. I prefer to give these things some time to simmer before diving in. Lucky for us, when we are ready to dive in, upgrading is a simple task. So now you should have Ansible available to you:

```
SEA-ML-RUPP1:~ trupp$ ansible --version
ansible 1.9.4
  configured module search path = None
SEA-ML-RUPP1:~ trupp$
```

Your first playbook

Depending on the tool, the body of work is called different things:

- Puppet calls them manifests
- Chef calls them recipes and cookbooks
- Ansible calls them plays and playbooks
- Saltstack calls them formulas and states

They're all the same idea. You have a system configuration you need to apply, you put it in a file, the tool interprets the file and applies the configuration to the system.
We will write a very simple playbook here to illustrate some concepts. It will create a file on the system. Booooooring. I know, terribly boring. We need to start somewhere though, and your eyes might roll back into your head if we were to start off with a more complicated example, like bootstrapping a BIG-IP or dynamically creating CloudFormation infrastructure in AWS, configuring HA pairs and pools, and injecting dynamically created members into those pools. So we are going to create a single file. We will call it site.yaml. Inside of that file, paste in the following:

```yaml
- name: My first play
  hosts: localhost
  connection: local
  gather_facts: true

  tasks:
    - name: Create a file
      copy:
        dest: "/tmp/test.txt"
        content: "This is some content"
```

This file is what Ansible refers to as a Playbook. Inside of this playbook file we have a single Play (My first play). There can be multiple Plays in a Playbook. Let's explore what's going on here, as well as touch upon the details of the Play itself.

First, that Play. Our Play is composed of a preamble that contains the following:

- name
- hosts
- connection
- gather_facts

The name is an arbitrary name that we give to our Play so that we will know what is being executed if we need to debug something or otherwise generate a reasonable status message. ALWAYS provide a name for your Plays, Tasks, everything that supports the name syntax.

Next, the hosts line specifies which hosts we want to target in our Play. For this Play we have a single host: localhost. We can get much more complicated than this though, to include:

- patterns of hosts
- groups of hosts
- groups of groups of hosts
- dynamically created hosts
- hosts that are not even real

You get the point. Next, the connection line tells Ansible how to connect to the hosts. Usually this is the default value ssh. In this case though, because I am operating on the localhost, I can skip SSH altogether and simply say local. After that, I used the gather_facts line to tell Ansible that it should interrogate the remote system (in this case localhost) to gather tidbits of information about it. These tidbits can include the installed operating system, the version of the OS, what sort of hardware is installed, etc.

After the preamble is written, you can see that I began a new block of "stuff"; in this case, the tasks associated with this Play. Tasks are Ansible's way of performing work on the system. The task that I am running here uses the copy module. As I did with my Play earlier, I provide a name for this task. Always name things! After that, the body of the module is written. There are two arguments that I have provided to this module (which are documented more in the References section below):

- dest
- content

I won't go into great detail here because the module documentation is very clear, but suffice it to say that dest is where I want the file written and content is what I want written in the file.

Running the playbook

We can run this playbook using the ansible-playbook command. For example:
```
SEA-ML-RUPP1:~ trupp$ ansible-playbook -i notahost, site.yaml
```

The output of the command should resemble the following:

```
PLAY [My first play] **********************************************************

GATHERING FACTS ***************************************************************
ok: [localhost]

TASK: [Create a file] *********************************************************
changed: [localhost]

PLAY RECAP ********************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0
```

We can also see that the file we created has the content that we expected:

```
SEA-ML-RUPP1:~ trupp$ cat /tmp/test.txt
This is some content
```

A brief aside on the syntax to run the command: Ansible requires that you specify an inventory file to provide hosts that it can orchestrate. In this specific example, we are not specifying a file. Instead we are doing the following:

- specifying an arbitrary string (notahost)
- followed by a comma

In Ansible, this is a short-hand trick to skip the requirement that an inventory file be specified. The comma is the key part of the argument. Without it, Ansible will look for a file called notahost and (hopefully) not find it, raising an error otherwise.

The output of the command is shown next. The output is actually fairly straightforward to read. It lists the PLAYs and TASKs that are running (as well as their names... see, I told you you wanted to have names). The status of the tasks is also shown. This can be values such as:

- changed
- ok
- failed
- skipped
- unreachable

Finally, all Ansible playbook runs end with a PLAY RECAP where Ansible will tell you what the status of the various plays on your hosts were. It is at this point where a playbook will be considered successful or not. In this case, the playbook was completely successful because there were no unreachable hosts nor failed hosts.

Summary

This was a brief introduction to the orchestration and automation system Ansible. There are far more complex subjects related to Ansible that I will touch upon in future posts. If you found this information useful, rate it as such. If you would like to see more advanced topics covered, videos demo'd, code samples written, or anything else on the subject, let me know in the comments below. Many organizations, both large and small, use DevOps tools like the one presented in this post. Ansible has several features, per design, that make it attractive to these organizations (such as being agent-less and having minimal requirements). If you'd like to see crazy sophisticated examples of Ansible in use... well... we'll get there. You need to rate and comment on my posts though to let me know that you want to see more.

References

copy - Copies files to remote locations. — Ansible Documentation
raw - Executes a low-down and dirty SSH command — Ansible Documentation
Variables — Ansible Documentation
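One hedged footnote on the idempotency claim from earlier: re-running the exact same playbook should leave /tmp/test.txt alone and report the copy task as ok rather than changed. The recap below illustrates the expected shape of that second run; it is not output captured from the author's machine:

```
TASK: [Create a file] *********************************************************
ok: [localhost]

PLAY RECAP ********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0
```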
Automate Data Group updates on many Big-IP devices using Big-IQ or Ansible or Terraform

Problem this snippet solves: In many cases, bad-IP-address lists generated by a SIEM (ELK, Splunk, IBM QRadar) need to be uploaded to the F5 to be blocked, but BIG-IQ can't be used to send data group changes to the F5 devices.

1. A workaround is to use the BIG-IQ script option to make all the F5 devices check a file on a source server and update the information in the external data group. I hope F5 adds an option to BIG-IQ to schedule when scripts are run; otherwise a cron job on the BIG-IQ may trigger the script feature that will make the data group refresh its data (sounds like the Matrix). https://clouddocs.f5.com/training/community/big-iq-cloud-edition/html/class5/module1/lab6.html

Example command to run in the BIG-IQ script feature:

```
tmsh modify sys file data-group ban_ip type ip source-path https://x.x.x.x/files/bad_ip.txt
```

https://support.f5.com/csp/article/K17523

2. You can also set the command as a cron job on the BIG-IP devices if you don't have BIG-IQ, as you just need a Linux server to host the data group files.

3. Also, without BIG-IQ, an Ansible playbook can be used to manage many data groups on the F5 devices, as in the playbook code I have added below. Now, with the Windows Subsystem for Linux, you can even run Ansible on Windows!

4. If you have AFM, then you can use custom feed lists to upload the external data without the need for Ansible or BIG-IQ. The ASM supports IP intelligence, but no custom feeds can be used: https://techdocs.f5.com/kb/en-us/products/big-ip-afm/manuals/product/big-ip-afm-getting-started-14-1-0/04.html

How to use this snippet:

I wrote my code after reading:

https://docs.ansible.com/ansible/latest/collections/f5networks/f5_modules/bigip_data_group_module.html
https://support.f5.com/csp/article/K42420223

If you want entries to time out automatically, you need to use the iRule table command (which you can't edit with the REST API, so see the article below for a workaround). It writes to RAM memory and supports an automatic timeout and lifetime for each entry. There is a nice article for that; I added a comment there about a possible bug resolution, so read the comments!

https://devcentral.f5.com/s/articles/populating-tables-with-csv-data-via-sideband-connections

Another way is, on the server where you save the data group info, to add a bash script that deletes old entries from time to time via cron. For example (I tested this), write each data group line/text entry with, say, an IP address and, next to it, the date it was added:

```bash
cutoff=$(date -d 'now - 30 days' '+%Y-%m-%d')
awk -v cutoff="$cutoff" '$2 >= cutoff { print }' <in.txt >out.txt && mv out.txt in.txt
```

https://stackoverflow.com/questions/38571524/remove-line-in-text-file-with-bash-if-the-date-is-older-than-30-days

Ansible is a great automation tool that makes changes only when the configuration is modified, so even if you run the same playbook twice (a playbook is the main config file, and it contains many tasks), the second time there will be no change (the same is true for Terraform). Ansible supports "for" loops but calls them "loop" (in earlier versions "with_items" was used) and "if/else" conditions but calls them "when", just to confuse us, and the conditions and loops are placed at the end of the task, not at the start 😀 A loop is good if you want to apply the same config to multiple devices or objects with only some variables changing (see the sketch after this paragraph), and "when" is nice, for example, to apply different tasks to different versions of F5 TMOS or to F5 devices with different provisioned modules.
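As a hedged illustration of that loop point (the second data group name and file path here are invented for the example, not taken from the snippet): the same bigip_data_group task can maintain several external data groups in one pass:

```yaml
- name: Maintain several external data groups from files
  bigip_data_group:
    name: "{{ item.name }}"
    records_src: "{{ item.src }}"
    type: "{{ item.type }}"
    provider: "{{ provider }}"
  loop:
    - { name: block_group, src: /var/www/files/bad.txt, type: address }
    - { name: allow_group, src: /var/www/files/good.txt, type: address }  # hypothetical second group
```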
Code:

```yaml
---
- name: Create or modify data group
  hosts: all
  connection: local

  vars:
    provider:
      password: xxxxx
      server: x.x.x.x
      user: xxxxx
      validate_certs: no
      server_port: 443

  tasks:
    - name: Create a data group of IP addresses from a file
      bigip_data_group:
        name: block_group
        records_src: /var/www/files/bad.txt
        type: address
        provider: "{{ provider }}"
      notify:
        - Save the running configuration to disk

  handlers:
    - name: Save the running configuration to disk
      bigip_config:
        save: yes
        provider: "{{ provider }}"
```

The "notify" triggers the handler task after the main task is done, as there is no point in saving the config before that, and the handler runs only on change.

Tested this on version: 15.1

Also, F5 now has a Terraform provider, and together with Visual Studio you can edit your code on Windows and deploy it from Visual Studio itself! Visual Studio will even open the terminal for you, where you can select the folder where the Terraform code will be saved; after you have added the code, run terraform init, terraform plan, terraform apply. Visual Studio even has a plugin for writing F5 iRules. Terraform's files are called "tf" files, and the Terraform providers are like the Ansible inventory file (Ansible may also have a provider object in the playbook, not the inventory file); they are used to make the connection and then to create the resources (like Ansible tasks).

Useful links for Visual Studio and Terraform:

https://registry.terraform.io/providers/F5Networks/bigip/1.16.0/docs/resources/bigip_ltm_datagroup
https://www.youtube.com/watch?v=Z5xG8HLwIh4

For more advanced Terraform stuff like for loops and if or count conditions:

https://blog.gruntwork.io/terraform-tips-tricks-loops-if-statements-and-gotchas-f739bbae55f9

Code: You may also need to add the resource below to save the config; with "depends_on" it will run after the data group is created. This is like the handler in Ansible that is started after the task is done (also, Terraform sometimes creates resources at the same time, not task after task like Ansible):

```terraform
resource "bigip_command" "save-config" {
  commands   = ["save sys config"]
  depends_on = [
    bigip_ltm_datagroup.terraform-external1
  ]
}
```

Tested this on version: 16.1

Ansible and Terraform can now be used for AS3 deployments, like the BIG-IQ's "applications", as they push the F5 declarative templates to the F5 device; nowadays even F5 AWAF/ASM and SSLO (SSL Orchestrator) support declarative configurations. For more info:

https://www.f5.com/company/blog/f5-as3-and-red-hat-ansible-automation
https://clouddocs.f5.com/products/orchestration/ansible/devel/f5_bigip/playbook_tutorial.html
https://clouddocs.f5.com/products/orchestration/terraform/latest/userguide/as3-integration.html
https://support.f5.com/csp/article/K23449665
https://clouddocs.f5.com/training/fas-ansible-workshop-101/3.3-as3-asm.html
https://www.youtube.com/watch?v=Ecua-WRGyJc&t=105s
BIG-IP ASM Automation with Ansible

My Background

Back in September I started my Ansible journey. Coming from no knowledge about Ansible and its automation capabilities, I was asked to develop some code/playbooks to automate some of the BIG-IP's ASM functions for AnsibleFest 2019. I was pleasantly surprised at how easy it was to install Ansible, build playbooks, and deliver the correct end-state for the BIG-IP. The playbooks and automation took me back down memory lane to when I was creating a universal network-bootable Norton Ghost CD in DOS for all of the different models of PCs my work owned. The team I work for (Business Development) has been working hard at making sure our code is easily accessible to customers through GitHub. Our goal is to provide the necessary tools, such as the F5 Automation Sandbox and use cases, so that whether you are new to Ansible or a die-hard coder, there is a place for you to test, consume, and bring life to the code.

What is BIG-IP ASM?

F5 BIG-IP® Application Security Manager™ (ASM) is a flexible web application firewall that secures web applications in traditional, virtual, and private cloud environments. BIG-IP ASM helps secure applications against unknown vulnerabilities and enables compliance with key regulatory mandates. BIG-IP ASM is a key part of the F5 application delivery firewall solution, which consolidates traffic management, network firewall, application access, DDoS protection, SSL inspection, and DNS security.

What is Ansible?

Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs. Designed for multi-tier deployments since day one, Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just managing one system at a time. It uses no agents and no additional custom security infrastructure, so it's easy to deploy; and most importantly, it uses a very simple language (YAML, in the form of Ansible playbooks) that allows you to describe your automation jobs in a way that approaches plain English.

What does the Code Do?

IP Blocking - In ASM, there is a feature called IP address intelligence that can allow or block IP addresses from being able to access protected applications. This code creates a Virtual IP (VIP) and a blank ASM policy attached to that VIP. After the creation, the code exports the ASM policy into an XML format, which is then modified by the snippet below to add blocked IP addresses, and re-imports that policy over the existing one (a hedged sketch of what the export step could look like appears at the end of this article). Prior to this snippet, we have code that checks whether an IP address already exists (for things like re-runs of the code) and keeps duplicate IP addresses from being added to the XML.
This is a snippet of the code where it modifies the ASM policy XML file (exported in previous steps of the code):

```yaml
#Import Additional Disallowed IPs
- name: Add Disallowed IPs
  xml:
    path: "{{ ASM_Policy_File }}"
    pretty_print: yes
    input_type: xml
    insertafter: yes
    xpath: /policy/geolocation
    add_children:
      - "<whitelist><ip_address>{{ item.item }}</ip_address><subnet_mask>255.255.255.255</subnet_mask><policy_builder_trusted>false</policy_builder_trusted><ignore_anomalies>false</ignore_anomalies><never_log>false</never_log><block_ip>Always</block_ip><never_learn>false</never_learn><description>blocked</description><ignore_ip_reputation>false</ignore_ip_reputation></whitelist>"
  with_items: "{{ Blocked_IP_Valid.results }}"
  when: Blocked_IPs is defined and item.rc == 1
```

Here is a demonstration of an IP being blocked and unblocked by the BIG-IP ASM policy.

Disallowed URL Filtering - Another feature of ASM is the ability to disallow URLs, which can be useful when working internally vs. externally; there are other reasons as well why a specific URL might be blocked or protected by BIG-IP ASM. This code can be used independently, cooperatively, or not at all with this playbook. Since this playbook is merged with the IP-blocking code, it follows the same flow (exporting/importing XML and error checking) as previously mentioned, to ensure no duplicates are made in the XML.

This is a snippet of the code where it modifies the ASM policy XML file (exported in previous steps of the code):

```yaml
#Import Additional Disallowed URLs
- name: Add Disallowed URLs
  xml:
    path: "{{ ASM_Policy_File }}"
    input_type: xml
    pretty_print: yes
    xpath: /policy/urls/disallowed_urls
    add_children:
      - "<url protocol=\"HTTP\" type=\"explicit\" name=\"{{ item.item }}\"/>"
      - "<url protocol=\"HTTPS\" type=\"explicit\" name=\"{{ item.item }}\"/>"
  with_items: "{{ Blocked_URLs_Valid.results }}"
  when: Blocked_URLs is defined and item.rc == 1
```

Here is a demonstration of specific URLs being blocked by the BIG-IP ASM policy. (Note: the file name in the repo has been changed, but it covers the same use case.)

Where can you access the playbook for this integration?
https://github.com/f5devcentral/f5-bd-ansible-usecases/tree/master/03-F5-WAF-Policy-Management

How to get all of the use cases currently available:
https://github.com/f5devcentral/f5-bd-ansible-usecases

Want to try it out but need a lab to work in? Try out our F5 Automation Sandbox built for AWS!
https://clouddocs.f5.com/training/automation-sandbox/
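For context on the export step referenced above, a stand-alone export can be done with the bigip_asm_policy_fetch module from the f5networks.f5_modules collection. This is a minimal sketch under assumed variable names (ASM_Policy_Name, provider); the repository's actual export task may differ:

```yaml
- name: Export the ASM policy to XML for offline editing
  bigip_asm_policy_fetch:
    name: "{{ ASM_Policy_Name }}"              # assumed variable: the policy created earlier
    dest: "{{ ASM_Policy_File | dirname }}"    # local directory to save into
    file: "{{ ASM_Policy_File | basename }}"   # file name to write
    binary: no                                 # export XML rather than a binary archive
    provider: "{{ provider }}"
  delegate_to: localhost
```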
Extract content of Certificate key file with REST or Ansible

Hi Community,

I'm working on an automation for renewing certificates on multiple BIG-IPs using Ansible. As not all available Ansible F5 modules provide what is required, I'm currently using a mix of modules and REST calls (which I call from Ansible). F5 Module Index

What works so far:

- Create a new CSR/key on the BIG-IP
- Get the new CA-based cert and upload it to the BIG-IP
- Upload the same cert to other BIG-IPs
- Update SSL profiles on multiple BIG-IPs

and some other tasks, like iRules, etc.

Anyhow, what doesn't work so far is getting the content of the key which was created on the first device together with the CSR. Basically, I don't have the key which needs to be uploaded to the other BIG-IPs as well. From the CLI, the following gives me what I need:

```
cat /config/filestore/files_d/Common_d/certificate_key_d/*name.key*
```

The problem with this is that I can't integrate it into Ansible using the bigip_command – Run TMSH and BASH commands on F5 devices module. It looks like only tmsh commands are supported, even though it states BASH as well. Plus, I try to avoid using this module whenever possible in the first place.

Through the GUI, a simple export and import on another device works: done, but obviously not automated. I have tried all possible Ansible modules as well as REST calls, but I don't get the content out of the .key file. I thought this would/should be a simple task. If anyone's done this using any approach, please share. I could create a new key and get a cert for each device, but first I'd like to find out if there's another way.

Thanks in advance,
Stefan
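One avenue worth sketching, since the post already mixes modules with REST calls: iControl REST exposes a bash utility endpoint that can run the same cat command remotely. This is a hedged sketch with placeholder address and credentials, not a confirmed answer to the original question:

```bash
# POST to the iControl REST bash utility; utilCmdArgs carries the shell command
curl -sk -u admin:'<password>' \
  -H "Content-Type: application/json" \
  -X POST "https://<bigip-mgmt-ip>/mgmt/tm/util/bash" \
  -d '{"command": "run", "utilCmdArgs": "-c \"cat /config/filestore/files_d/Common_d/certificate_key_d/*name.key*\""}'
```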