Ansible
Infrastructure as Code: Automating F5 Distributed Cloud CEs with Ansible
Introduction

Welcome to the first installment of our Infrastructure as Code (IaC) series, focusing on F5 products and Ansible. This series has been a long-standing desire of mine to showcase the ability of IaC utilizing Ansible Automation Platform to deliver Day 0 through Day 2 operations with multiple F5 virtualized platforms. Over time, I've encountered numerous financial clients expressing interest in this topic. For many of these clients, the prospect of leveraging IaC to redeploy an environment outweighs the traditional approach of performing upgrades. This series will hopefully provide insight, documentation, and code for anyone embarking on this journey.

Why Ansible Automation Platform?

Like most people, I started my journey with community editions of Ansible. As my coding became more complex, so did the need to ensure that my lab infrastructure adhered to the best security guidelines required by my company (my goal being to mimic how customers would/should do things in real life). I began utilizing Ansible Automation Platform to ensure my credentials were protected, as well as to organize and share my code with the rest of my team (following the 'just in case you got hit by a bus' theory). Ansible Automation Platform utilizes execution environments (EE) to ensure code runs efficiently and cleanly every time. Now, I am also creating Execution Environments via GitHub with workflows and pushing them up to Quay.io (https://github.com/VDI-Tech-Guy/f5-execution-engines). Huge thanks to Colin McNaughton at Red Hat for making my life so much easier with building EEs!

Why deploy F5 Distributed Cloud on VMware vSphere?

As I mentioned before, I had the desire to build this Infrastructure as Code (IaC) project a while back, prior to the Broadcom acquisition of VMware. Being an ex-VMware employee, I brought a lot of knowledge of virtualization platform infrastructure into this project, and I started my focus on deploying on VMware vSphere. F5 Distributed Cloud can be deployed in any cloud, anywhere. However, I really wanted to focus on on-premises deployments because not every customer can afford the cloud. Moreover, there's always a back-and-forth battle between on-premises and the cloud, which has evolved into the Hybrid Cloud and the Multi-Cloud. I do intend to extend this series to the Multi-Cloud, but these initial deployments will be focused on VMware vSphere, as it is still utilized in many organizations across the globe.

Information about the Setup in the Demo Video

If you watch the video (below) on how the deployment works, you can see I did a bunch of the pre-work prior to launching the deployment, in the Git repository (link in the Resources section). Here are the pre-work items:

- A fully functional Ansible Automation Platform 2.4+ environment was set up and working (at the time, the controller version was 4.4.4).
- The Execution Environment was imported into the Ansible Automation Platform Controller (a sample EE definition file is sketched below).
- The Project was set up to import the Playbooks from the Git repository (in the Resources section below) and to set the default Execution Environment.
- The Demo Inventory was set up (in our use case we only needed the vCenter host).
- We set up Network Credentials for the vCenter.
- The Template was set up and had its variables populated (note: the API key was hidden).
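As an aside on that Execution Environment item: if you want to build a similar EE yourself rather than importing a prebuilt one, the conventional starting point is an ansible-builder definition file. The sketch below is a minimal, illustrative example and not the definition used in the linked repository; the three referenced files would hold your own collection, Python, and OS-package dependencies.

    ---
    version: 1
    dependencies:
      galaxy: requirements.yml   # e.g. the f5networks.f5_modules collection
      python: requirements.txt   # Python libraries the playbooks import
      system: bindep.txt         # OS packages baked into the image

Running ansible-builder against this file produces a container image that can be pushed to a registry such as Quay.io and selected as the Execution Environment in the AAP Controller.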
As mentioned in the video (below), the variables were populated for my environment; this contains all the required information. I have provided a demo example in the Git repository for anyone to adapt my settings to their environment, and the example has comments about each field (or area of a field) and the purpose of the variable.

    {
      "rhel_location": "https://vesio.blob.core.windows.net/releases/rhel/9/x86_64/images/vmware/rhel-9.2023.29-20231212012955-single-nic.ova",
      "xc_api_credential": "_____________________________________",
      "xc_namespace": "mmabis-automation",
      "xc_console_host": "f5-bd",
      "xc_user": "admin",
      "xc_pass": "Ansible123!",
      "vcenter_hostname": "{{ ansible_host }}",
      "vcenter_username": "{{ ansible_env.ANSIBLE_NET_USERNAME }}",
      "vcenter_password": "{{ ansible_env.ANSIBLE_NET_PASSWORD }}",
      "vcenter_validate_certs": false,
      "datacenter_name": "Apex",
      "cluster_name": "Worlds-Edge",
      "datastore": "TrueNAS-SSD",
      "dvs_switch_name": "DSC-DVS",
      "dns_name_servers": [
        "192.168.192.20",
        "192.168.192.1"
      ],
      "dns_name_search": [
        "dsc-services.local",
        "localdomain"
      ],
      "ntp_servers": [
        "0.pool.ntp.org",
        "1.pool.ntp.org",
        "2.pool.ntp.org"
      ],
      "domain_fqdn": "dsc-services.local",
      "DVS_Name": "{{dvs_switch_name}}",
      "Internal_Network": "DVS-Server-vLan",
      "External_Network": "DVS-DMZ-vLan",
      "resource_pool_name": "Lab-XC",
      "waiting_period": 2,
      "temp_download_location": "/tmp/xc-ova-download.ova",
      "xc_ova_builds": [
        {
          "hostname": "xc-automation-rhel-demo",
          "tmpl_name": "xc-automation-rhel-demo",
          "admin_password": "Ansible123!",
          "cluster_name": "xc-automation-cluster-rhel-demo",
          "dhcp": "no",
          "external_ip": "172.16.192.170",
          "external_ip_subnet_prefix": "24",
          "external_ip_gw": "172.16.192.1",
          "external_ip_route": "0.0.0.0/0",
          "internal_ip": "192.168.192.170",
          "internal_ip_subnet_prefix": "22",
          "internal_ip_gw": "192.168.192.1",
          "certified_hw": "vmware-regular-nic-voltmesh",
          "latitude": "39.51833126",
          "longitude": "-104.759496962",
          "build_count": 3,
          "nic_config": "rhel-multi"
        }
      ]
    }

Launching the Code

With all of that pre-work handled, it was as easy as launching the code. There were a few caveats I learned over time when dealing with the automation that I wanted to share:

- Never re-use a cluster name in F5 Distributed Cloud, especially if it was used with a different version of the CE (there were communication issues between the CEs and previous cluster information that was stored in the F5 Distributed Cloud Console).
- The API credentials are system-level when trying to accept registration or create the token for importing into the environment. This code is designed to check for "{{ xc_namespace }}-token": if it exists, the existing token is utilized; if not, the code will try to create it, so you need system-level permissions to do this. (A sketch of this check appears below.)
- Build Count should be 3 by default (it still needs to be defined) or an odd number, based on recommendations I have heard from our F5 field.
- If there are more caveats I think of, I'll definitely edit the post and make sure it stays up-to-date.

When launching the code I was able to get the lab to build up correctly multiple times, so please, if there is an issue or something I might not have documented well, feel free to let me know, and give it a shot for yourself!
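To illustrate the token caveat in pseudo-playbook form: the logic amounts to "look up the token, and create it only if the lookup comes back empty." The sketch below is mine, not the repository's actual implementation; the API endpoint paths are placeholders, and the header style is an assumption based on the F5 Distributed Cloud API token convention.

    - name: Look up the existing registration token (endpoint path is a placeholder)
      ansible.builtin.uri:
        url: "https://{{ xc_console_host }}.console.ves.volterra.io/api/<token-endpoint>/{{ xc_namespace }}-token"
        method: GET
        headers:
          Authorization: "APIToken {{ xc_api_credential }}"
        status_code: [200, 404]
      register: token_lookup

    - name: Create the token only when it does not already exist
      ansible.builtin.uri:
        url: "https://{{ xc_console_host }}.console.ves.volterra.io/api/<token-endpoint>"
        method: POST
        headers:
          Authorization: "APIToken {{ xc_api_credential }}"
        body_format: json
        body:
          name: "{{ xc_namespace }}-token"
      when: token_lookup.status == 404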
YouTube Video now on DevCentral Channel

Resources

- https://github.com/f5devcentral/f5-bd-ansible-day0-automation - the code utilized for this deployment
- https://github.com/VDI-Tech-Guy/f5-execution-engines - building Execution Environments with GitHub and workflows

Conclusion

I do hope that this series will help everyone who wants to embrace IaC, and if you have any questions feel free to reach out!

Getting started with Ansible
Ansible is an orchestration and automation engine. It provides a means for you to automate the administration of different devices, from Linux to Windows and different special-purpose appliances in between. Ansible falls into the world of DevOps-related tools. You may have heard of others that play in this area as well, including:

- Chef
- Puppet
- Saltstack

In this article I'm going to briefly skim the surface of what Ansible is and how you can get started using it. I've been toying around with it for some years now, and (most recently at F5) using it to streamline some development work I've been involved in. If you, like me, are a fan of dabbling with interesting tools and swear by the "Automate all the Things!" catch-phrase, then you might take an interest in Ansible. We're going to start small though and build upon what we learn. My goal here is to eventually bring you all to the point where we're doing some crazy awesome things with Ansible and F5 products. I'll also go into some brief detail on features of Ansible that make it relatively painless to interoperate with existing F5 products. Let's get started!

So why Ansible?

Any time that it comes to adopting some new technology for your everyday use, inevitably you need to ask yourself "what's in it for me?". Why not just use some custom shell scripts and pssh to do everything? Here are my reasons for using Ansible.

- It is agent-less
- The only dependencies (on the remote device) are SSH and Python; and even Python is not really a dependency
- The language that you "do" stuff in is YAML. No CS degree or programming language expertise is required (Perl, Ruby, Python, etc.)
- Extending it is simple (in my opinion)
- Actions are idempotent
- Order of operations is well-defined and work is performed top-down

Many of the original tools in the DevOps space were agent-based tools. This is a major problem for environments where it's literally (due to technology or politics) impossible to install an agent. Your SLA may prohibit you from installing software on the box. Or, you might legitimately not be able to install the software due to older libraries or other missing dependencies. Ansible has no agent requirement; a plus in my book.

Most of the systems that you will come across can be, today, manipulated by Ansible. It is agent-less by design. Dependency-wise, you need to be able to connect to the machine you want to orchestrate, so it makes sense that SSH is a dependency. Also, you would like to be able to do higher-order "stuff" to a machine. That's where the Python dependency comes into play. I say dependency loosely though, because Ansible provides a way to run raw commands on remote systems regardless of whether Python is installed. For professional Ansible development though, this method of orchestrating devices is largely not recommended except in very edge cases.

Ansible's configuration language is YAML. If you have never seen YAML before, this is what it looks like:

    - name: Deploy common hosts files settings
      hosts: all
      connection: ssh
      gather_facts: true

      tasks:
        - name: Install required packages
          apt:
            name: "{{ item }}"
            state: "present"
          with_items:
            - ntp
            - ubuntu-cloud-keyring
            - python-mysqldb

YAML is generally composed of simple key/value pairs, lists, and dictionaries. Contrast this with the Puppet configuration language; a special DSL that resembles a real programming language.
    class sso {
      case $::lsbdistcodename {
        default: {
          $ssh_version = 'latest'
        }
      }

      class { '::sso':
        ldap_uri          => $::ldap_uri,
        dev_env           => true,
        ssh_version       => $ssh_version,
        sshd_allow_groups => $::sshd_allow_groups,
      }
    }

Or contrast this with Chef, in which you must know Ruby to be able to use.

    servers = search(
      :node,
      "is_server:true AND chef_environment:#{node.chef_environment}"
    ).sort! do |a, b|
      a.name <=> b.name
    end

    begin
      resources('service[mysql]')
    rescue Chef::Exceptions::ResourceNotFound
      service 'mysql'
    end

    template "#{mysql_dir}/etc/my.conf" do
      source 'my.conf.erb'
      mode 0644
      variables :servers => servers,
                :mysql_conf => node['mysql']['mysql_conf']
      notifies :restart, 'service[mysql]'
    end

In Ansible, work that is performed is idempotent. That's a buzzword. What does it mean? It means that an operation can be performed multiple times without changing the result beyond its initial application. If I try to add the same line to a file a thousand times, it will be added once and then will not be added again 999 times. Another example is adding user accounts. They would be added once, not many times (which might raise errors on the system). A concrete example follows just before we write our first playbook.

Finally, Ansible's workflow is well defined. Work starts at the top of a playbook and makes its way to the bottom. Done. End of story. There are other tools that have a declarative model. These tools attempt to read your mind: "You declare to me how the node should look at the end of a run, and I will determine the order that steps should be run to meet that declaration." Contrast this with Ansible, which only operates top-down. We start at the first task, then move to the second, then the third, etc. This removes much of the "magic" from the equation. Often times an error might occur in a declarative tool due specifically to how that tool arranges its dependency graph. When that happens, it's difficult to determine what exactly the tool was doing at the time of failure. That magic doesn't exist in Ansible; work is always top-down whether it be tasks, roles, dependencies, etc. You start at the top and you work your way down.

Installation

Let's now take a moment to install Ansible itself. Ansible is distributed in different ways depending on your operating system, but one tried and true method to install it is via pip; the recommended tool for installing Python packages. I'll be working on a vanilla installation of Ubuntu 15.04.2 (vivid) for the remaining commands. Ubuntu includes a pip package that should work for you without issue. You can install it via apt-get.

    sudo apt-get install python-pip python-dev

Afterwards, you can install Ansible.

    sudo pip install markupsafe ansible==1.9.4

You might ask "why not Ansible 2.0?". Well, because 2.0 was just released and the community is busy ironing out some new-release bugs. I prefer to give these things some time to simmer before diving in. Lucky for us, when we are ready to dive in, upgrading is a simple task. So now you should have Ansible available to you.

    SEA-ML-RUPP1:~ trupp$ ansible --version
    ansible 1.9.4
      configured module search path = None
    SEA-ML-RUPP1:~ trupp$

Your first playbook

Depending on the tool, the body of work is called different things.

- Puppet calls them manifests
- Chef calls them recipes and cookbooks
- Ansible calls them plays and playbooks
- Saltstack calls them formulas and states

They're all the same idea. You have a system configuration you need to apply, you put it in a file, the tool interprets the file and applies the configuration to the system.
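Here is that promised idempotency example. Running this play twice reports changed on the first run and ok on the second, because the line is only ever added once (a minimal sketch using the core lineinfile module; the file path and line are arbitrary):

    - name: Idempotency illustration
      hosts: localhost
      connection: local
      gather_facts: false

      tasks:
        - name: Ensure a line is present exactly once
          lineinfile:
            dest: "/tmp/motd.txt"
            line: "Managed by Ansible"
            create: yes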
We will write a very simple playbook here to illustrate some concepts. It will create a file on the system. Booooooring. I know, terribly boring. We need to start somewhere though, and your eyes might roll back into your head if we were to start off with a more complicated example like bootstrapping a BIG-IP or dynamically creating cloud formation infrastructure in AWS and configuring HA pairs, pools, and injecting dynamically created members into those pools. So we are going to create a single file. We will call it site.yaml. Inside of that file paste in the following.

    - name: My first play
      hosts: localhost
      connection: local
      gather_facts: true

      tasks:
        - name: Create a file
          copy:
            dest: "/tmp/test.txt"
            content: "This is some content"

This file is what Ansible refers to as a Playbook. Inside of this playbook file we have a single Play (My first play). There can be multiple Plays in a Playbook. Let's explore what's going on here, as well as touch upon the details of the play itself.

First, that Play. Our play is composed of a preamble that contains the following:

- name
- hosts
- connection
- gather_facts

The name is an arbitrary name that we give to our Play so that we will know what is being executed if we need to debug something or otherwise generate a reasonable status message. ALWAYS provide a name for your Plays, Tasks, everything that supports the name syntax.

Next, the hosts line specifies which hosts we want to target in our Play. For this Play we have a single host; localhost. We can get much more complicated than this though, to include:

- patterns of hosts
- groups of hosts
- groups of groups of hosts
- dynamically created hosts
- hosts that are not even real

You get the point.

Next, the connection line tells Ansible how to connect to the hosts. Usually this is the default value ssh. In this case though, because I am operating on the localhost, I can skip SSH altogether and simply say local.

After that, I used the gather_facts line to tell Ansible that it should interrogate the remote system (in this case the system localhost) to gather tidbits of information about it. These tidbits can include the installed operating system, the version of the OS, what sort of hardware is installed, etc.

After the preamble is written, you can see that I began a new block of "stuff". In this case, the tasks associated with this Play. Tasks are Ansible's way of performing work on the system. The task that I am running here is using the copy module. As I did with my Play earlier, I provide a name for this task. Always name things! After that, the body of the module is written. There are two arguments that I have provided to this module (which are documented more in the References section below):

- dest
- content

I won't go into great detail here because the module documentation is very clear, but suffice it to say that dest is where I want the file written and content is what I want written in the file.

Running the playbook

We can run this playbook using the ansible-playbook command. For example.
    SEA-ML-RUPP1:~ trupp$ ansible-playbook -i notahost, site.yaml

The output of the command should resemble the following.

    PLAY [My first play] **********************************************************

    GATHERING FACTS ***************************************************************
    ok: [localhost]

    TASK: [Create a file] *********************************************************
    changed: [localhost]

    PLAY RECAP ********************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0

We can also see that the file we created has the content that we expected.

    SEA-ML-RUPP1:~ trupp$ cat /tmp/test.txt
    This is some content

A brief aside on the syntax to run the command. Ansible requires that you specify an inventory file to provide hosts that it can orchestrate. In this specific example, we are not specifying a file. Instead we are doing the following:

- specifying an arbitrary string (notahost)
- followed by a comma

In Ansible, this is a short-hand trick to skip the requirement that an inventory file be specified. The comma is the key part of the argument. Without it, Ansible will look for a file called notahost and (hopefully) not find it; raising an error otherwise.

The output of the command is shown next. The output is actually fairly straight-forward to read. It lists the PLAYs and TASKs that are running (as well as their names...see, I told you you wanted to have names). The status of the Tasks is also shown. This can be values such as:

- changed
- ok
- failed
- skipped
- unreachable

Finally, all Ansible Playbook runs end with a PLAY RECAP where Ansible will tell you what the status of the various plays on your hosts were. It is at this point where a Playbook will be considered successful or not. In this case, the Playbook was completely successful because there were no unreachable hosts nor failed hosts.

Summary

This was a brief introduction to the orchestration and automation system Ansible. There are far more complex subjects related to Ansible that I will touch upon in future posts. If you found this information useful, rate it as such. If you would like to see more advanced topics covered, videos demo'd, code samples written, or anything else on the subject, let me know in the comments below. Many organizations, both large and small, use DevOps tools like the one presented in this post. Ansible has several features, per design, that make it attractive to these organizations (such as being agent-less, and having minimum requirements). If you'd like to see crazy sophisticated examples of Ansible in use...well...we'll get there. You need to rate and comment on my posts though to let me know that you want to see more.

References

- copy - Copies files to remote locations. — Ansible Documentation
- raw - Executes a low-down and dirty SSH command — Ansible Documentation
- Variables — Ansible Documentation

Existing Ansible BIG-IP modules
Right around the time that I started at F5, I was at the pinnacle of my exposure to Ansible. So imagine my surprise when I saw BIG-IP modules in the Ansible core product! I immediately wanted to know which one of my colleagues I could go talk Ansible-fu with! I cracked open the source code, found the names of contributors, made haste to the corporate phone book to look up where they sat, and...wait a second...

...no entries in the phone book?

...that curious look in colleagues' eyes that says "I haven't the foggiest idea what you are talking about".

Then it hit me...the BIG-IP modules didn't originate at F5. Upon further investigation, it became clear to me that, indeed, we had no skin in this game. These modules originated from two enterprising individuals who I found on DevCentral, and I would be remiss to exclude their valuable contributions to the Ansible+F5 cause, so in this post I want to highlight those contributions and bring others up-to-speed on current Ansible+BIG-IP functionality.

Credit where it is due

As far as I can tell from perusing Ansible's git logs, the existing BIG-IP modules in Ansible are the work of Matt Hite and Serge van Ginderachter. Both are DevCentral contributors, and have had a presence in the Ansible community from (at least) 2013 when their modules landed in the tool. Among the modules that they had a hand in are:

- bigip_facts
- bigip_node
- bigip_pool
- bigip_pool_member
- bigip_monitor_http
- bigip_monitor_tcp

Since that time, I've seen even more folks step up to the plate with Ansible modules that will be making their appearance in upcoming releases. Well, I wanted to have a hand in this work too. I figured I had a perfect opportunity to help because:

- I had some background in Ansible
- I had access to unlimited BIG-IP technical resources

It was only a question of how to get involved. And then, Matt posted this in a pull request (PR)...

    Unfortunately I don't have GTM to smoke test this with. Can you solicit some testers on the mailing list?

And Serge followed with

    So same here, not tested as no GTM gear, but an offline code review.

Bingo...I was in. If there's one area where I figured I could have the greatest impact, it's the area of technical resources. There is BIG-IP stuff all over the place at F5, and I sit next to, or at least near, many of the people who have knowledge of the products; far more advanced knowledge than I do. Many of those same people hang out and contribute on DevCentral. Paired with my knowledge of Ansible, I figured I could help test BIG-IP related PRs to validate their functionality beyond what a code review offered. So that's what I did (and what we're about to do).

Before you begin

If you were around for the earlier introductory article to Ansible that I posted a couple days ago, pull up a terminal to that machine. We need to install one more dependency that is common to all of the BIG-IP modules in Ansible; bigsuds.

    pip install bigsuds

With that in place, you're ready to work with the Ansible modules. Additionally, I am going to be using a very minimal inventory file that just includes a single BIG-IP. You will want to adjust yours for your environment, but here is mine.

    [test]
    big-ip01.internal

Note that the name above is available in my local DNS. If you only have an IP address to work with, you can just add that to your inventory file. For example:

    [test]
    192.168.10.2

Also note that I included a line called [test]. In Ansible this is referred to as a group (a multi-device example is sketched below).
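For instance, a group can collect several BIG-IPs under one label, and a play's hosts: line can then target all of them at once (the extra hostnames here are made up for illustration):

    [bigips]
    big-ip01.internal
    big-ip02.internal
    big-ip03.internal

A play that sets hosts: bigips would run its tasks against all three devices.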
We will be revisiting this in the future as we begin orchestrating multiple BIG-IPs. Let's dive in to some of Matt and Serge's BIG-IP modules!

bigip_facts

In Ansible, there is a pattern you often see over and over. This is the pattern of modules that provide information, and modules that change settings. For this tutorial I'll refer to them as:

- reader modules
- writer modules

Without exception, reader modules are suffixed with an _facts string. Writer modules are not. So, with that in mind...pop quiz!

If I asked you whether the bigip_pool module was a reader module or a writer module, you would say...writer!

If I asked you whether the bigip_pool_facts module was a reader module or a writer module, you would say...reader!

The bigip_facts module will return to you a number of facts about the BIG-IP in question. Let's take a look. First, my playbook.

    - name: Test bigip_facts
      hosts: test
      connection: local

      tasks:
        - name: Get all of the facts from my BIG-IP
          bigip_facts:
            server: "{{ inventory_hostname }}"
            user: "admin"
            password: "admin"
            include: "system_info"

And now, let's just run it to see what sort of output it generates.

    ansible-playbook -i hosts site.yaml -vvvv

You should be presented with information resembling the following.

    root@debian:~# ansible-playbook -i hosts site.yaml -vvvv

    PLAY [Test bigip_facts] *******************************************************

    TASK: [Get all of the facts from my BIG-IP] ***********************************
    ...
    ok: [big-ip01.internal] => {"ansible_facts": {"system_info": {"base_mac_address": "8A:83:8B:B1:AE:F7", "blade_temperature": [], "chassis_slot_information": [], "globally_unique_identifier": "8A:83:8B:B1:AE:F7", "group_id": "DefaultGroup", "hardware_information": [{"model": "Common KVM processor", "name": "cpus", "slot": 0, "type": "HARDWARE_BASE_BOARD", "versions": [{"name": "cache size", "value": "4096 KB"}, {"name": "cores", "value": "2"}, {"name": "cpu MHz", "value": "2199.994"}]}],
    ...
    "os_version": "#1 SMP Mon Aug 11 19:54:07 PDT 2014", "platform": "Z100", "product_category": "Virtual Edition", "switch_board_part_revision": null, "switch_board_serial": null, "system_name": "Linux"}, "time": {"day": 28, "hour": 22, "minute": 1, "month": 1, "second": 7, "year": 2016}, "time_zone": {"gmt_offset": -8, "is_daylight_saving_time": false, "time_zone": "PST"}, "uptime": 854}}, "changed": false}
    root@debian:~#

This module can output a lot of information that you can use in later tasks. Note that I just asked for the system_info facts, but there are a number of them that you can ask for, including:

- address_class
- certificate
- client_ssl_profile
- device
- device_group
- interface
- key
- node
- pool
- rule
- self_ip
- software
- system_info
- traffic_group
- trunk
- virtual_address
- virtual_server
- vlan

You can include multiple types of facts by separating them with a comma. For example:

    - name: Test bigip_facts
      hosts: test
      connection: local

      tasks:
        - name: Get all of the facts from my BIG-IP
          bigip_facts:
            server: "{{ inventory_hostname }}"
            user: "admin"
            password: "admin"
            include: "system_info,software,self_ip"

This returns facts representing the system_info, software, and self IPs. You can use the generated facts in later tasks by referencing their JSON keys.
For example:

    - name: Test bigip_facts
      hosts: test
      connection: local

      tasks:
        - name: Get all of the facts from my BIG-IP
          bigip_facts:
            server: "{{ inventory_hostname }}"
            user: "admin"
            password: "admin"
            include: "system_info,software,self_ip"

        - name: Mention the Self IP
          debug:
            msg: "I have self IP {{ self_ip['/Common/net1'].address }}"

        - name: Mention the software
          debug:
            msg: "I have software version {{ software[0].version }}"

And the (truncated) output:

    ...
    TASK: [Mention the Self IP] ***************************************************
    ok: [big-ip01.internal] => {
        "msg": "I have self IP 10.2.2.2"
    }

    TASK: [Mention the software] **************************************************
    ok: [big-ip01.internal] => {
        "msg": "I have software version 11.6.0"
    }
    ...

bigip_node

This module allows you to manipulate nodes in a BIG-IP. Nodes are logical objects on your BIG-IP that identify the IP address of a physical node on your network. In terms of what we can do with them with the existing BIG-IP modules, you can use this module to create nodes that you can later assign to pool members. First, let's show you the playbook that I'm going to run.

    - name: Node shenanigans
      hosts: test
      connection: local

      tasks:
        - name: Add a new node
          bigip_node:
            server: "{{ inventory_hostname }}"
            user: "admin"
            password: "admin"
            host: "10.2.1.1"
            name: "member1"

This is a simple example of creating a node in my BIG-IP. As is probably apparent, if I am going to be working with pools, I would want to create many nodes. I'll hold off on that until a future example, but hopefully this clarifies how to create the object that we can later assign to pool members. Let's run it!

    ansible-playbook -i hosts site.yaml

And the output we should expect to see looks like this:

    root@debian:~# ansible-playbook -i hosts site.yaml

    PLAY [Node shenanigans] *******************************************************

    TASK: [Add a new node] ********************************************************
    changed: [big-ip01.internal]

    PLAY RECAP ********************************************************************
    big-ip01.internal          : ok=1    changed=1    unreachable=0    failed=0
    root@debian:~#

Now, just to clarify what gets dropped where, and where these nodes can be used, let's first look at the Local Traffic > Nodes screen. Now, let's navigate over to the pool screen and try to create a new pool. On the new pool creation screen, we have the option of specifying members of that pool. If we click on Node List, well, look at that. Our new node.

bigip_pool

This module allows you to create pools on your BIG-IPs. To these pools we can later add pool members. With this and the bigip_pool_member module, you can control the basic load balancing functionality of the BIG-IP. First, just to prove that I have nothing up my BIG-IP sleeve, here's my current pool list. And now, for my playbook:

    - name: Create a pool
      hosts: test
      connection: local

      tasks:
        - name: Create the pool1 pool
          bigip_pool:
            server: "{{ inventory_hostname }}"
            user: "admin"
            password: "admin"
            name: "pool1"

With a wave of my magic wand...

    root@debian:~# ansible-playbook -i hosts site.yaml

    PLAY [Create a pool] **********************************************************

    TASK: [Create the pool1 pool] *************************************************
    ...
    changed: [big-ip01.internal] => {"changed": true}

    PLAY RECAP ********************************************************************
    big-ip01.internal          : ok=1    changed=1    unreachable=0    failed=0
    root@debian:~#

...and my pool list has been changed. (A variation of this task that sets more pool options is sketched below.)
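As that variation: the same task can set more of the pool's behavior at creation time. The parameter names and accepted value formats changed between module generations, so treat this as an illustrative sketch against the legacy module rather than a copy-paste recipe:

    - name: Create pool1 with a load balancing method and a monitor
      bigip_pool:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "pool1"
        lb_method: "round_robin"    # value format differs in newer module versions
        monitor_type: "and_list"
        monitors:
          - "/Common/http"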
The bigip_pool module has a number of other options that let you adjust settings of the pool. They are all documented on the module's page.

bigip_pool_member

With the bigip_pool_member module, you can manipulate the members of any of the pools on your BIG-IP. This module, in particular, is a crucial part of a rolling upgrade strategy that you may undertake when upgrading software on members of the pool. Consider the following scenario.

1. Remove (or disable) the member from the pool
2. Upgrade the member's software (you pick the software, but let's say Apache for example)
3. Verify the software upgrade
4. Add (or enable) the member back to the old pool, or, add the member to a new pool

This module can be used for steps 1 and 4. Let's have a look at my playbook.

    - name: Pool member manipulation
      hosts: test
      connection: local

      tasks:
        - name: Drop members out of pool1
          bigip_pool_member:
            server: "{{ inventory_hostname }}"
            user: "admin"
            password: "admin"
            state: "absent"
            host: "{{ item }}"
            pool: "pool1"
            port: "22"
          with_items:
            - member1
            - member2

        - name: Intermediate processing
          debug:
            msg: "Upgrade some software"

        - name: More intermediate processing
          debug:
            msg: "Install some configuration"

        - name: Add member1 node back
          bigip_node:
            server: "{{ inventory_hostname }}"
            user: "admin"
            password: "admin"
            name: "{{ item }}"
            host: "10.2.1.1"
            state: "present"
          with_items:
            - member1

        - name: Add member2 node back
          bigip_node:
            server: "{{ inventory_hostname }}"
            user: "admin"
            password: "admin"
            name: "{{ item }}"
            host: "10.2.1.2"
            state: "present"
          with_items:
            - member2

        - name: Add member1 back to pool1
          bigip_pool_member:
            server: "{{ inventory_hostname }}"
            user: "admin"
            password: "admin"
            host: "{{ item }}"
            pool: "pool1"
            port: "22"
            state: "present"
          with_items:
            - member1

        - name: Add member2 to pool2
          bigip_pool_member:
            server: "{{ inventory_hostname }}"
            user: "admin"
            password: "admin"
            host: "{{ item }}"
            pool: "pool2"
            port: "22"
            state: "present"
          with_items:
            - member2

And before we run it, let's just take a peek at my current pools, as well as the members of pool1 and the members of pool2. Now, let's have a go at running the playbook.

    ansible-playbook -i hosts site2.yaml

After it runs, you should have output that resembles something like this.

    root@debian:~# ansible-playbook -i hosts site.yaml

    PLAY [Pool member manipulation] ***********************************************

    TASK: [Drop members out of pool1] *********************************************
    changed: [big-ip01.internal] => (item=member1)
    ok: [big-ip01.internal] => (item=member2)

    TASK: [Intermediate processing] ***********************************************
    ok: [big-ip01.internal] => {
        "msg": "Upgrade some software"
    }

    TASK: [More intermediate processing] ******************************************
    ok: [big-ip01.internal] => {
        "msg": "Install some configuration"
    }

    TASK: [Add member1 node back] *************************************************
    changed: [big-ip01.internal] => (item=member1)

    TASK: [Add member2 node back] *************************************************
    ok: [big-ip01.internal] => (item=member2)

    TASK: [Add member1 back to pool1] *********************************************
    changed: [big-ip01.internal] => (item=member1)

    TASK: [Add member2 to pool2] **************************************************
    ok: [big-ip01.internal] => (item=member2)

    PLAY RECAP ********************************************************************
    big-ip01.internal          : ok=7    changed=3    unreachable=0    failed=0
    root@debian:~#

And your pool members should be different; check out the screenshots below.
[Screenshot: The pool list]
[Screenshot: Members in pool1]
[Screenshot: Members in pool2]

To Summarize

BIG-IP would not have a presence in Ansible if it were not for Matt and Serge's initiative. Based on the work they started, I'm happy to assist with testing and contributing new modules moving forward. In my entirely too biased opinion, Ansible fits in well with the tools and products I work on. Perhaps it will also fit in well with your workflow. If we've piqued your interest, then the Ansible mailing list is a great place to ask more Ansible related questions and further solidify your understanding. If you liked this article and want to see more like it, rate it as such. Have comments? Questions? Something else? Leave a comment below!

References

- http://docs.ansible.com/ansible/bigip_facts_module.html
- http://docs.ansible.com/ansible/bigip_node_module.html
- http://docs.ansible.com/ansible/bigip_pool_module.html
- http://docs.ansible.com/ansible/bigip_pool_member_module.html
- https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_configuration_guide_10_0_0/ltm_nodes.html#1188710
- https://groups.google.com/forum/#!forum/ansible-project

F5 Automation with Ansible Tips and Tricks
Getting Started with Ansible and F5

In this article we are going to provide you with a simple set of videos that demonstrate, step by step, how to implement automation with Ansible. In the last video, however, we will demonstrate how telemetry and automation may be used in combination to address potential performance bottlenecks and ensure application availability.

To start, we will provide you with details on how to get started with Ansible automation using the Ansible Automation Platform®.

Backing up your F5 device

Once a user has installed and configured Ansible Automation Platform, we will transition to a basic maintenance function: an automated backup of a BIG-IP hardware device or Virtual Edition (VE). This is always recommended before major changes are made to our BIG-IP devices.

Configuring a Virtual Server

Next, we will use Ansible to configure a Virtual Server, a task that is most frequently performed manually on the BIG-IP. When changes to a BIG-IP are infrequent, manual intervention may not be so cumbersome; however, large enterprise customers may need to perform these tasks hundreds of times.

Replace an SSL Certificate

The next video will demonstrate how to use Ansible to replace an SSL certificate on a BIG-IP. It is important to note that this video will show the certificate being applied on a BIG-IP and then validated by browsing to the application website.

Configure and Deploy an iRule

The next administrative function will demonstrate how to configure and push an iRule onto a BIG-IP device using the Ansible Automation Platform®. Again, this is a standard administrative task that can be simply automated via Ansible.

Delete the Existing Virtual Server

OK, so now we have to delete the above configuration to roll back to a steady state. This is a common administrative task when an application is retired. We again demonstrate how Ansible automation may be used to perform these simple administrative tasks.

Telemetry and Automation: Using Threshold Triggers to Automate Tasks and Fix Performance Bottlenecks

Now you have a clear demonstration of how to utilize Ansible automation to perform routine tasks on a BIG-IP platform. Once you have become proficient with more routine Ansible tasks, we can explore more high-level, sophisticated automation tasks. In the demonstration below, we show how BIG-IP administrators using SSL Orchestrator® (SSLO) can combine telemetry with automation to address performance bottlenecks in an application environment.

Resources

That is a short series of tutorials on how to perform routine tasks using automation, plus a preview of a more sophisticated use of automation based upon telemetry and automatic thresholds. For more detail on our partnership, please visit our F5/Ansible page or visit the Red Hat Automation Hub for information on the F5 Ansible certified collections.

- https://www.f5.com/ansible
- https://www.ansible.com/products/automation-hub
- https://galaxy.ansible.com/f5networks/f5_modules

Using CryptoNice as a Sanity-Checking Tool for Automated Application Deployments with Ansible
Any good automation pipeline should have some validation built in to perform sanity checks on what it should have done. This article describes how you can use CryptoNice as a simple and easy way of sanity checking the SSL/TLS configuration of your automated application deployments, by integrating CryptoNice into your pipeline directly after your automation solution has deployed the application. I am going to show a simple Ansible playbook that illustrates this point. The Ansible playbook deploys an application to a BIG-IP that resides in AWS, but it could be any BIG-IP, on premises or in any cloud. After the deployment is complete, I use CryptoNice to validate that the application is reachable via TLS and that the deployed VIP is using best practices for SSL deployments.

What Is CryptoNice?

CryptoNice is both a command line tool and Python library that is developed by F5 Labs and is publicly available; it provides the ability to scan and report on the configuration of SSL/TLS for your internet- or internal-facing web services. Built using the sslyze API and SSL, http-client, and DNS libraries, CryptoNice collects data on a given domain and performs a series of tests to check the TLS configuration. You can get CryptoNice here: https://github.com/F5-Labs/cryptonice

What Is Ansible?

Ansible is an open-source software-provisioning, configuration-management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems, and can configure Unix-like systems as well as Microsoft Windows and F5 BIG-IPs. You can learn how to install Ansible here: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html

F5 publishes instructions on how to integrate the Ansible plugins here: https://clouddocs.f5.com/products/orchestration/ansible/devel/

In this article I use the published version 1 F5 Ansible plugin that uses the iControl REST API. If we take a brief look under the hood here, this plugin is using an imperative API. As of today, F5 has a version 2 plugin in preview that focuses on managing F5 BIG-IP/BIG-IQ through declarative APIs such as AS3, DO, TS, and CFE. You can also check the version 2 plugin preview out on F5 Cloud Docs.

I used a stock Ubuntu image as a basis for my Ansible server and followed the instructions for installing Ansible on Ubuntu, then followed the instructions for installing the F5 plugin for Ansible referenced above. You can take a look at the F5/Ansible 1.0 plugin that is published on the Ansible Galaxy Hub here: https://galaxy.ansible.com/f5networks/f5_modules

For my demonstration, I also installed CryptoNice on the same server where Ansible and the F5 Ansible plugin are installed; at that point you are off to the races and you can build a simple Ansible script and begin to automate a BIG-IP. The F5 Ansible module provides a great deal of programmability for BIG-IP basic onboarding, WAF, APM, and LTM. The following is a great reference, with Ansible examples, that gives you an idea of the scope of the capabilities this module provides. You can also automate the infrastructure creation if you so choose, meaning you could stand up a BIG-IP in AWS and then configure everything required to get the device onboarded, and after that configure traffic-management objects like VIPs/pools, etc.

https://clouddocs.f5.com/products/orchestration/ansible/devel/modules/module_index.html

For my test, I created a simple Ansible script that automates the creation of a VIP with an iRule to respond to HTTP GET requests.
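(A quick aside on that install step: CryptoNice is packaged as a Python module; assuming the PyPI package name matches the project name (check the F5 Labs repository above for current instructions), installing it on the Ansible control node is a one-liner.)

    pip install cryptonice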
The example also associates a pool and pool members to the VIP to give you an idea of what a simple application deployment may look like. After that, I run CryptoNice to test the quality of the SSL/TLS. My simple Ansible playbook looks like this:

    ---
    - name: Create a VIP, pool and pool members
      hosts: f5
      connection: local

      vars:
        provider:
          password: notmypassword
          server: f5cove.me
          user: auser
          validate_certs: no
          server_port: 8443

      tasks:
        - name: Create a pool
          bigip_pool:
            provider: "{{ provider }}"
            lb_method: ratio-member
            name: examplepool
            slow_ramp_time: 120
          delegate_to: localhost

        - name: Add members to pool
          bigip_pool_member:
            provider: "{{ provider }}"
            description: "webserver {{ item.name }}"
            host: "{{ item.host }}"
            name: "{{ item.name }}"
            pool: examplepool
            port: '80'
          with_items:
            - host: 10.0.0.68
              name: web01
            - host: 10.0.0.67
              name: web02
          delegate_to: localhost

        - name: Create a VIP
          bigip_virtual_server:
            provider: "{{ provider }}"
            description: avip
            destination: 10.0.0.66
            name: vip-1
            irules:
              - responder
            pool: examplepool
            port: '443'
            snat: Automap
            profiles:
              - http
              - f5cove
          delegate_to: localhost

        - name: Create Ridirect
          bigip_virtual_server:
            provider: "{{ provider }}"
            description: avipredirect
            destination: 10.0.0.66
            name: vip-1-redirect
            irules:
              - _sys_https_redirect
            port: '80'
            snat: None
            profiles:
              - http
          delegate_to: localhost

    - name: play cryptonice nicely
      hosts: 127.0.0.1
      connection: local

      tasks:
        - pause:
            seconds: 30

        - name: run cryptonice
          shell: "cryptonice f5cove.me"
          register: output

        - debug: var=output.stdout_lines

My output from running the playbook looks like this:

    PLAY [Create a VIP, pool and pool members] ************************************

    TASK [Gathering Facts] ********************************************************
    ok: [1.2.3.4]

    TASK [Create a pool] **********************************************************
    ok: [1.2.3.4 -> localhost]

    TASK [Add members to pool] ****************************************************
    ok: [1.2.3.4 -> localhost] => (item={'host': '10.0.0.68', 'name': 'web01'})
    ok: [1.2.3.4 -> localhost] => (item={'host': '10.0.0.67', 'name': 'web02'})

    TASK [Create a VIP] ***********************************************************
    changed: [1.2.3.4 -> localhost]

    TASK [Create Ridirect] ********************************************************
    ok: [1.2.3.4 -> localhost]

    PLAY [play cryptonice nicely] *************************************************

    TASK [Gathering Facts] ********************************************************
    ok: [localhost]

    TASK [pause] ******************************************************************
    Pausing for 30 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
    ok: [localhost]

    TASK [run cryptonice] *********************************************************
    changed: [localhost]

    TASK [debug] ******************************************************************
    ok: [localhost] => {
        "output.stdout_lines": [
            "Pre-scan checks",
            "-------------------------------------",
            "Scanning f5cove.me on port 443...",
            "Analyzing DNS data for f5cove.me",
            "Fetching additional records for f5cove.me",
            "f5cove.me resolves to 34.217.132.104",
            "34.217.132.104:443: OPEN",
            "TLS is available: True",
            "Connecting to port 443 using HTTPS",
            "Queueing TLS scans (this might take a little while...)",
            "Looking for HTTP/2",
            "",
            "RESULTS",
            "-------------------------------------",
            "Hostname:\t\t\t f5cove.me",
            "",
            "Selected Cipher Suite:\t\t ECDHE-RSA-AES128-GCM-SHA256",
            "Selected TLS Version:\t\t TLS_1_2",
            "",
            "Supported protocols:",
            "TLS 1.2:\t\t\t Yes",
            "TLS 1.1:\t\t\t Yes",
            "TLS 1.0:\t\t\t Yes",
            "",
            "TLS fingerprint:\t\t 29d29d15d29d29d21c29d29d29d29d930c599f185259cdd20fafb488f63f34",
            "",
            "CERTIFICATE",
            "Common Name:\t\t\t f5cove.me",
            "Issuer Name:\t\t\t R3",
            "Public Key Algorithm:\t\t RSA",
            "Public Key Size:\t\t 2048",
            "Signature Algorithm:\t\t sha256",
            "",
            "Certificate is trusted:\t\t False (Mozilla not trusted)",
            "Hostname Validation:\t\t OK - Certificate matches server hostname",
            "Extended Validation:\t\t False",
            "Certificate is in date:\t\t True",
            "Days until expiry:\t\t 29",
            "Valid From:\t\t\t 2021-02-12 23:28:14",
            "Valid Until:\t\t\t 2021-05-13 23:28:14",
            "",
            "OCSP Response:\t\t\t Successful",
            "Must Staple Extension:\t\t False",
            "",
            "Subject Alternative Names:",
            "\t f5cove.me",
            "",
            "Vulnerability Tests:",
            "No vulnerability tests were run",
            "",
            "HTTP to HTTPS redirect:\t\t False",
            "None",
            "",
            "RECOMMENDATIONS",
            "-------------------------------------",
            "HIGH - TLSv1.0 Major browsers are disabling TLS 1.0 imminently. Carefully monitor if clients still use this protocol. ",
            "HIGH - TLSv1.1 Major browsers are disabling this TLS 1.1 immenently. Carefully monitor if clients still use this protocol. ",
            "Low - CAA Consider creating DNS CAA records to prevent accidental or malicious certificate issuance.",
            "",
            "Scans complete",
            "-------------------------------------",
            "Total run time: 0:00:05.688562"
        ]
    }

Note that after the playbook is complete, CryptoNice is telling me that I need to improve the quality of my SSL profile on my BIG-IP. As a result, I create a second SSL profile and:

- Disable TLS 1.0 and 1.1.
- Upgrade the certificate from a 2048-bit key to a 4096-bit key.
- Make sure that the certificate bundle is configured correctly to ensure that the certificate is properly trusted.
- Create a cipher rule and group, and use the following cipher string to improve the quality of the SSL connections:

    !EXPORT:!DHE+AES-GCM:!DHE+AES:ECDHE+AES-GCM:ECDHE+AES:RSA+AES-GCM:RSA+AES:-MD5:-SSLv3:-RC4:!3DES

I then alter the playbook to reference the new SSL profile, and then re-run the Ansible playbook. Here is the relevant playbook snippet:

        - name: Create a VIP
          bigip_virtual_server:
            provider: "{{ provider }}"
            description: avip
            destination: 10.0.0.66
            name: vip-1
            irules:
              - responder
            pool: examplepool
            port: '443'
            snat: Automap
            profiles:
              - http
              - f5cove    <change this to the new SSL Profile>
          delegate_to: localhost

The resulting output from CryptoNice, after running the playbook again to switch the SSL profile, addresses all of the HIGH recommendations:

    TASK [debug] ******************************************************************
    ok: [localhost] => {
        "output.stdout_lines": [
            "Pre-scan checks",
            "-------------------------------------",
            "Scanning f5cove.me on port 443...",
            "Analyzing DNS data for f5cove.me",
            "Fetching additional records for f5cove.me",
            "f5cove.me resolves to 34.217.132.104",
            "34.217.132.104:443: OPEN",
            "TLS is available: True",
            "Connecting to port 443 using HTTPS",
            "Queueing TLS scans (this might take a little while...)",
            "Looking for HTTP/2",
            "",
            "RESULTS",
            "-------------------------------------",
            "Hostname:\t\t\t f5cove.me",
            "",
            "Selected Cipher Suite:\t\t ECDHE-RSA-AES256-GCM-SHA384",
            "Selected TLS Version:\t\t TLS_1_2",
            "",
            "Supported protocols:",
            "TLS 1.2:\t\t\t Yes",
            "",
            "TLS fingerprint:\t\t 2ad2ad0002ad2ad0002ad2ad2ad2adcb09dd549309271837f87ac5dad15fa7",
            "",
            "HTTP/2 supported:\t\t False",
            "",
            "CERTIFICATE",
            "Common Name:\t\t\t f5cove.me",
            "Issuer Name:\t\t\t R3",
            "Public Key Algorithm:\t\t RSA",
            "Public Key Size:\t\t 4096",
            "Signature Algorithm:\t\t sha256",
            "",
            "Certificate is trusted:\t\t True (No errors)",
            "Hostname Validation:\t\t OK - Certificate matches server hostname",
            "Extended Validation:\t\t False",
            "Certificate is in date:\t\t True",
            "Days until expiry:\t\t 88",
            "Valid From:\t\t\t 2021-04-13 00:55:09",
            "Valid Until:\t\t\t 2021-07-12 00:55:09",
            "",
            "OCSP Response:\t\t\t Successful",
            "Must Staple Extension:\t\t False",
            "",
            "Subject Alternative Names:",
            "\t f5cove.me",
            "",
            "Vulnerability Tests:",
            "No vulnerability tests were run",
            "",
            "HTTP to HTTPS redirect:\t\t False",
            "None",
            "",
            "RECOMMENDATIONS",
            "-------------------------------------",
            "Low - CAA Consider creating DNS CAA records to prevent accidental or malicious certificate issuance.",
            "",
            "Scans complete",
            "-------------------------------------",
            "Total run time: 0:00:05.638186"
        ]
    }

Now you can see in the CryptoNice output that:

- There are no trust issues with the certificate/certificate bundle.
- I have upgraded from a 2048-bit to a 4096-bit key length.
- I have disabled TLS 1.0 and TLS 1.1.
- The connection is handshaking with a more secure cryptographic algorithm.
- I no longer have any HIGH recommendations for improving the quality of the TLS connection.

Conclusion

It is always best practice to perform a sanity check on the services that you create in your automation pipelines. This article describes using CryptoNice as part of a simple Ansible playbook to run an SSL/TLS sanity check on the application in order to improve the security of your application-delivery automation.
Ultimately you should not just be checking the SSL/TLS; another good idea would be to introduce application-vulnerability scanners too as part of the automation pipeline.

Power of tmsh commands using Ansible
Why is data important

Having accurate data has become an integral part of decision making. The data could be for making simple decisions, like purchasing the newest electronic gadget in the market, or for complex decisions on what hardware and/or software platform works best for your highly demanding application, which would provide the best user experience for your customer. In either case, research and data collection become essential. Deciding what kind of F5 hardware and/or software to use in your environment follows the same principles, where your IT team requires data to make the right decision. The data could cover CPU, throughput, and/or memory utilization, etc., of your F5 gear. It could also be data for just a period of a day, a month, or a year, depending on the application usage patterns.

Ansible to the rescue

Your environment could have tens, hundreds, or even thousands of F5 BIG-IPs; manually logging into each one to gather data would be a highly inefficient method. One great and simple way is to use Ansible as an automation framework to perform this task, freeing you up to perform your other job functions.

Let's take a look at some of the components needed to use Ansible. An inventory file in Ansible defines the hosts against which your playbook is going to run. Below is an example of a file defining F5 hosts, which can be expanded to represent your tens, hundreds, or thousands of BIG-IPs.

Inventory file: 'inventory.yml'

    [f5]
    ltm01 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
    ltm02 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
    ltm03 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
    ltm04 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
    ltm05 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443

A playbook defines the tasks that are going to be executed. In this playbook we are using the bigip_command module, which can take as input any BIG-IP tmsh command and provide the output. Here we are going to use tmsh commands to gather performance data from the BIG-IPs. The output from each BIG-IP is going to be stored in a file that can be referenced after the playbook finishes execution.

Playbook: 'performance_data.yml'

    ---
    - name: Create empty file
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Creating an empty file
          file:
            path: "./{{ filename }}"
            state: touch

    - name: Gather stats using tmsh command
      hosts: f5
      connection: local
      gather_facts: false
      serial: 1
      tasks:
        - name: Gather performance stats
          bigip_command:
            provider:
              server: "{{ server }}"
              user: "{{ user }}"
              password: "{{ password }}"
              server_port: "{{ server_port }}"
              validate_certs: "{{ validate_certs }}"
            commands:
              - show sys performance throughput historical
              - show sys performance system historical
          register: result

        - lineinfile:
            line: "\n###BIG-IP hostname => {{ inventory_hostname }} ###\n"
            insertafter: EOF
            dest: "./{{ filename }}"

        - lineinfile:
            line: "{{ result.stdout_lines }}"
            insertafter: EOF
            dest: "./{{ filename }}"

        - name: Format the file
          shell:
            cmd: sed 's/,/\n/g' ./{{ filename }} > ./{{ filename }}_formatted

        - pause:
            seconds: 10

    - name: Delete file
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Delete extra file created (delete file)
          file:
            path: ./{{ filename }}
            state: absent

Execution: The execution command will take as input the playbook name, the inventory file, as well as the filename where the output will be stored.
(There are different ways of defining and passing parameters to a playbook; below is one such example.)

    ansible-playbook performance_data.yml -i inventory.yml --extra-vars "filename=perf_output"

Snippet of expected output:

    ###BIG-IP hostname => ltm01 ###

    [['Sys::Performance Throughput'
    '-----------------------------------------------------------------------'
    'Throughput(bits)(bits/sec)     Current  3 hrs   24 hrs  7 days  30 days'
    '-----------------------------------------------------------------------'
    'Service                        223.8K   258.8K  279.2K  297.4K  112.5K'
    'In                             212.1K   209.7K  210.5K  243.6K  89.5K'
    'Out                            21.4K    21.0K   21.1K   57.4K   30.1K'
    ''
    '-----------------------------------------------------------------------'
    'SSL Transactions               Current  3 hrs   24 hrs  7 days  30 days'
    '-----------------------------------------------------------------------'
    'SSL TPS                        0        0       0       0       0'
    ''
    '-----------------------------------------------------------------------'
    'Throughput(packets)(pkts/sec)  Current  3 hrs   24 hrs  7 days  30 days'
    '-----------------------------------------------------------------------'
    'Service                        79       82      83      63      62'
    'In                             41       40      40      34      32'
    'Out                            41       40      40      32      34']
    ['Sys::Performance System'
    '------------------------------------------------------------'
    'System CPU Usage(%)            Current  3 hrs   24 hrs  7 days  30 days'
    '------------------------------------------------------------'
    'Utilization                    17       18      18      18      17'
    ''
    '------------------------------------------------------------'
    'Memory Used(%)                 Current  3 hrs   24 hrs  7 days  30 days'
    '------------------------------------------------------------'
    'TMM Memory Used                10       10      10      10      10'
    'Other Memory Used              55       55      54      54      53'
    'Swap Used                      0        0       0       0       0']]

    ###BIG-IP hostname => ltm02 ###

    [['Sys::Performance Throughput'
    '-----------------------------------------------------------------------'
    'Throughput(bits)(bits/sec)     Current  3 hrs   24 hrs  7 days  30 days'
    '-----------------------------------------------------------------------'
    'Service                        202.3K   258.7K  279.2K  297.4K  112.5K'
    'In                             190.8K   209.7K  210.5K  243.6K  89.5K'
    'Out                            19.6K    21.0K   21.1K   57.4K   30.1K'
    ''
    '-----------------------------------------------------------------------'
    'SSL Transactions               Current  3 hrs   24 hrs  7 days  30 days'
    '-----------------------------------------------------------------------'
    'SSL TPS                        0        0       0       0       0'
    ''
    '-----------------------------------------------------------------------'
    'Throughput(packets)(pkts/sec)  Current  3 hrs   24 hrs  7 days  30 days'
    '-----------------------------------------------------------------------'
    'Service                        77       82      83      63      62'
    'In                             39       40      40      34      32'
    'Out                            37       40      40      32      34']
    ['Sys::Performance System'
    '------------------------------------------------------------'
    'System CPU Usage(%)            Current  3 hrs   24 hrs  7 days  30 days'
    '------------------------------------------------------------'
    'Utilization                    21       18      18      18      17'
    ''
    '------------------------------------------------------------'
    'Memory Used(%)                 Current  3 hrs   24 hrs  7 days  30 days'
    '------------------------------------------------------------'
    'TMM Memory Used                10       10      10      10      10'
    'Other Memory Used              55       55      54      54      53'
    'Swap Used                      0        0       0       0       0']]
For reference: https://www.f5.com/pdf/products/big-ip-local-traffic-manager-ds.pdf

The above is one example of how you can get started with using Ansible and tmsh commands. Using this method you can potentially achieve close to 100% automation on the BIG-IP.

Protecting Critical Apps against EastWest Attack
In the previous article, we explained how NetSecOps and DevSecOps can manage their application security policies to prevent advanced attacks from external networks. But in advanced persistent hacking, hackers sometimes exploit application vulnerabilities and use advanced malware with phishing emails aimed at the operators. This is an old technique but still valid and utilized by many APT (Advanced Persistent Threat) hacking groups. If advanced hackers obtain a DevOps operator's ID and password using the malware, they can access a Kubernetes or OpenShift cluster through the normal login process and easily bypass advanced WAF (Web Application Firewall) solutions deployed in front of the cluster. Once the attacker has a user ID and password for the Kubernetes or OpenShift cluster, the attacker can also access each application that is running inside of the cluster. Since most SecOps teams install only very basic security functions inside the Kubernetes or OpenShift cluster, a hacker who has logged in to the cluster can attack other applications in the same cluster without any security barrier. F5 Container Ingress Services is not designed to stop this sort of attack within the cluster. To overcome this challenge, we have another tool: NGINX App Protect.

NGINX App Protect delivers Layer 7 visibility and granular control for applications while enabling advanced application security policies. With an NGINX App Protect deployment, DevSecOps can ensure only legitimate traffic is allowed while all other unwanted traffic is blocked. NGINX App Protect can monitor the traffic traversing namespace boundaries between pods and provide advanced application protection at Layer 7 for East-West traffic.

Solution Overview

This article will cover how NGINX App Protect can protect critical applications in an OpenShift environment against an attack originating within the same cluster. Detecting advanced application attacks inside the cluster is beneficial for the DevSecOps team, but it can increase the complexity of security operations. To provide a certain level of protection for a critical application, the NGINX App Protect instance should be installed as a 'pod proxy' or a 'service proxy' for that application. This means the customer may need multiple NGINX App Protect instances to reach the required level of protection for their applications. On the face of it, this might seem like a dramatic increase in the complexity of security-related operations.

Security automation is the recommended solution to overcome the increased complexity of this security operations challenge. In this use case, we use Red Hat Ansible as our security automation tool. With Red Hat Ansible, the user can automate their incident response process with their existing security solutions, which can dramatically reduce the security team's response time from hours to minutes. We use Ansible and Elasticsearch to provide all the required security automation processes in this demo. With all these combined technologies, the solution provides WAF protection for the critical applications deployed in the OpenShift cluster. Once it detects an application-based attack from the same cluster subnet, it immediately blocks the attack and deletes the compromised pod with a pre-defined security automation playbook.

The workflow is organized as shown below:

1. The malware from the phishing email infects the developer's laptop.
2. The attacker steals the ID/PW of the developer using the malware.
(In this demo, the stolen ID is 'dev_user'.)
3. The attacker logs in to the 'Test App' in the 'dev-test01' namespace, owned by 'dev_user'.
4. The attacker starts the network-scanning process on the internal subnet of the OpenShift cluster and finds the 'critical-app' application pod.
5. The attacker starts the web-based attack against 'critical-app'.
6. NGINX App Protect protects 'critical-app'; the attack traffic is blocked immediately.
7. NGINX exports the alert details to the external Elasticsearch.
8. If this specific alert meets a pre-defined condition, Elasticsearch triggers the pre-defined Ansible playbook.
9. The Ansible playbook accesses OpenShift and deletes the compromised 'Test App' pod automatically.

*Since this demo focuses on an attack inside the OpenShift cluster, the demo does not include Step #1 and Step #2 (the phishing email).

Understanding the 'Security Automation' Process

Security automation is the key part of this demo, because organizations don't want to respond to each WAF alert manually, one by one. A manual incident-response process is time-consuming and inefficient, especially in a modern-app environment with hundreds of container-based applications. In this demo, Red Hat Ansible and Elasticsearch handle the security automation. Below is a brief workflow of the security automation in this use case:

1. F5 Advanced WAF has been deployed in front of the OpenShift cluster and inserts an X-Forwarded-For header value into each session. Since F5 Advanced WAF inserts the X-Forwarded-For header into packets that come from outside, a packet that doesn't include the X-Forwarded-For header is likely coming from the internal network.
2. NGINX App Protect is installed as a 'pod proxy' for the critical application we want to protect. Because NGINX App Protect runs as a pod proxy, all traffic must pass through it to reach the 'critical-app'.
3. If NGINX App Protect detects any malicious activity, it sends the alert details to the external Elasticsearch system.
4. When a new alert comes in from NGINX App Protect, Elasticsearch analyzes the details of the alert. If the alert meets the conditions below, Elasticsearch triggers a notification to Logstash:
   - The source IP address of the alert is part of the OpenShift cluster subnet.
   - The WAF alert severity is Critical.
5. Once Logstash receives the notification from Elasticsearch, it creates the ip.txt file, which includes the source IP address of the attack, and executes the pre-defined Ansible playbook.
6. The Ansible playbook reads the ip.txt file and extracts the IP address. Ansible then accesses OpenShift, finds the compromised pod using that source IP address, and deletes the compromised pod and the ip.txt file automatically.

Creating the Ansible Playbook

Red Hat Ansible is an automation tool that enables network and security automation with enterprise-ready functions. F5 and Red Hat have a strategic partnership and deliver joint use cases for our customer base. With Ansible integration with F5 solutions, organizations can have single-pane-of-glass management for network and security automation. In this use case, we implement an automated security response process with an Ansible playbook that runs when F5 NGINX App Protect detects malicious activities in the OpenShift cluster. Below is the Ansible playbook that executes the incident response process against the attacker's compromised pod.
ansible_ocp.yaml

---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Login to OCP cluster
      k8s_auth:
        host: https://yourocpdomain:6443
        username: kubeadmin
        password: your_ocp_password
        validate_certs: no
      register: k8s_auth_result

    - name: Extract IP Address
      command: cat /yourpath/ip.txt
      register: badpod_ip

    - name: Extract App Label from OpenShift
      shell: |
        sudo oc get pods -A -o json --field-selector status.podIP={{ badpod_ip.stdout }} | grep "\"app\":" | awk '{print $2}' | sed 's/,//'
      register: app_label

    - name: Delete Malicious Deployments
      shell: |
        sudo oc delete all --selector app={{ app_label.stdout }} -A
      register: delete_pod

    - name: Delete IP and Info File
      command: rm -rf /yourpath/ip.txt

    - name: OCP Service Deletion Completed
      debug:
        msg: "{{ delete_pod.stdout }}"

Configuring Elasticsearch Watcher and Logstash

To trigger the Ansible playbook for the security automation, SOC analysts need to validate the alert from NGINX App Protect first. Depending on the alert details, the SOC analyst might want to execute a different playbook. For example, if the alert is related to a credential stuffing attack, the SOC analysts may want to block the user's application access. But if the alert is related to a known IP blacklist, the analyst might want to block that IP address in the firewall. To support these requirements, the security team needs a tool that can monitor security alerts and trigger the required actions based on them. Elasticsearch Watcher is a feature of the commercial version of Elasticsearch that users can use to create actions based on conditions, which are periodically evaluated using queries on the data.

1. Configuring the Watcher in Kibana

* You need an Elastic Platinum license or an Eval license to use this feature in Kibana.
* Go to the Kibana UI.
* Management -> Watcher -> Create -> Create advanced watcher
* Copy and paste the JSON code below.

watcher_ocp.json

{
  "trigger": {
    "schedule": {
      "interval": "1m"
    }
  },
  "input": {
    "search": {
      "request": {
        "search_type": "query_then_fetch",
        "indices": [
          "nginx-*"
        ],
        "rest_total_hits_as_int": true,
        "body": {
          "query": {
            "bool": {
              "must": [
                {
                  "match": {
                    "outcome_reason": "SECURITY_WAF_VIOLATION"
                  }
                },
                {
                  "match": {
                    "x_forwarded_for_header_value": "N/A"
                  }
                },
                {
                  "range": {
                    "@timestamp": {
                      "gte": "now-1h",
                      "lte": "now"
                    }
                  }
                }
              ]
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.hits.total": {
        "gt": 0
      }
    }
  },
  "actions": {
    "logstash_logging": {
      "webhook": {
        "scheme": "http",
        "host": "localhost",
        "port": 1234,
        "method": "post",
        "path": "/{{watch_id}}",
        "params": {},
        "headers": {},
        "body": "{{ctx.payload.hits.hits.0._source.ip_client}}"
      }
    },
    "logstash_exec": {
      "webhook": {
        "scheme": "http",
        "host": "localhost",
        "port": 9001,
        "method": "post",
        "path": "/{{watch_id}}",
        "params": {},
        "headers": {},
        "body": "{{ctx.payload.hits.hits[0].total}}"
      }
    }
  }
}
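Before wiring everything together, it can be useful to dry-run the watcher instead of waiting for its one-minute schedule. A minimal sketch using the Execute Watch API on Elasticsearch 7.x (the watch ID 'watcher_ocp' and the credentials are assumptions based on the example above; run this on the Elasticsearch host):

# Trigger the watcher manually and record the run in the watch history
curl -s -u elastic:your_password -X POST "http://localhost:9200/_watcher/watch/watcher_ocp/_execute" \
  -H 'Content-Type: application/json' -d '{"record_execution": true}'

If the input query matches any recent NGINX App Protect alerts, the response shows the two webhook actions firing against the Logstash listeners on ports 1234 and 9001.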
2. Configuring the 'logstash.conf' file

Below is the final version of the 'logstash.conf' file. Please note that you have to start Logstash with 'sudo' privileges.

logstash.conf

input {
  syslog {
    port => 5003
    type => nginx
  }
  http {
    port => 1234
    type => watcher1
  }
  http {
    port => 9001
    type => ansible1
  }
}

filter {
  if [type] == "nginx" {
    grok {
      match => {
        "message" => [
          ",attack_type=\"%{DATA:attack_type}\"",
          ",blocking_exception_reason=\"%{DATA:blocking_exception_reason}\"",
          ",date_time=\"%{DATA:date_time}\"",
          ",dest_port=\"%{DATA:dest_port}\"",
          ",ip_client=\"%{DATA:ip_client}\"",
          ",is_truncated=\"%{DATA:is_truncated}\"",
          ",method=\"%{DATA:method}\"",
          ",policy_name=\"%{DATA:policy_name}\"",
          ",protocol=\"%{DATA:protocol}\"",
          ",request_status=\"%{DATA:request_status}\"",
          ",response_code=\"%{DATA:response_code}\"",
          ",severity=\"%{DATA:severity}\"",
          ",sig_cves=\"%{DATA:sig_cves}\"",
          ",sig_ids=\"%{DATA:sig_ids}\"",
          ",sig_names=\"%{DATA:sig_names}\"",
          ",sig_set_names=\"%{DATA:sig_set_names}\"",
          ",src_port=\"%{DATA:src_port}\"",
          ",sub_violations=\"%{DATA:sub_violations}\"",
          ",support_id=\"%{DATA:support_id}\"",
          ",unit_hostname=\"%{DATA:unit_hostname}\"",
          ",uri=\"%{DATA:uri}\"",
          ",violation_rating=\"%{DATA:violation_rating}\"",
          ",vs_name=\"%{DATA:vs_name}\"",
          ",x_forwarded_for_header_value=\"%{DATA:x_forwarded_for_header_value}\"",
          ",outcome=\"%{DATA:outcome}\"",
          ",outcome_reason=\"%{DATA:outcome_reason}\"",
          ",violations=\"%{DATA:violations}\"",
          ",violation_details=\"%{DATA:violation_details}\"",
          ",request=\"%{DATA:request}\""
        ]
      }
      break_on_match => false
    }
    mutate {
      split => { "attack_type" => "," }
      split => { "sig_ids" => "," }
      split => { "sig_names" => "," }
      split => { "sig_cves" => "," }
      split => { "sig_set_names" => "," }
      split => { "threat_campaign_names" => "," }
      split => { "violations" => "," }
      split => { "sub_violations" => "," }
      remove_field => [ "date_time", "message" ]
    }
    if [x_forwarded_for_header_value] != "N/A" {
      mutate { add_field => { "source_host" => "%{x_forwarded_for_header_value}"} }
    } else {
      mutate { add_field => { "source_host" => "%{ip_client}"} }
    }
    geoip {
      source => "source_host"
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
  }
}

output {
  if [type] == 'nginx' {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "nginx-%{+YYYY.MM.dd}"
    }
  }
  if [type] == 'watcher1' {
    file {
      path => "/yourpath/ip.txt"
      codec => line { format => "%{message}"}
    }
  }
  if [type] == 'ansible1' {
    exec {
      command => "ansible-playbook /yourpath/ansible_ocp.yaml"
    }
  }
}
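A syntax error in this file will keep the pipeline from starting, so it is worth validating it before restarting the service. A small sketch, assuming a standard package install path for Logstash:

# Parse-check the pipeline and exit without starting it
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf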
Simulate the Demo

You should start the Kibana Watcher and the Logstash service before proceeding with this step.

1. Verify the environment

Kubeadmin console: please make sure you're logged in to the OCP cluster using a cluster-admin account, and confirm the 'critical-app' is running correctly.

j.lee$ oc whoami
kube:admin
j.lee$
j.lee$ oc get projects
NAME                                  DISPLAY NAME   STATUS
critical-app                                         Active
default                                              Active
dev-test02                                           Active
kube-node-lease                                      Active
kube-public                                          Active
kube-system                                          Active
openshift                                            Active
openshift-apiserver                                  Active
openshift-apiserver-operator                         Active
openshift-authentication                             Active
openshift-authentication-operator                    Active
openshift-cloud-credential-operator                  Active
j.lee$ oc get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE                                             NOMINATED NODE   READINESS GATES
critical-app-v1-5c6546765f-wjhl9   2/2     Running   1          85m   10.129.2.71   ip-10-0-180-68.ap-southeast-1.compute.internal   <none>           <none>
j.lee$

dev_user console: please make sure you're logged in to the OCP cluster using the 'dev_user' account on the compromised pod, and confirm the 'dev-test-app' is running correctly.

PS C:\Users\ljwca\Documents\ocp> oc whoami
dev_user
PS C:\Users\ljwca\Documents\ocp>
PS C:\Users\ljwca\Documents\ocp> oc get projects
NAME         DISPLAY NAME   STATUS
dev-test02                  Active
PS C:\Users\ljwca\Documents\ocp>
PS C:\Users\ljwca\Documents\ocp> oc get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE                                              NOMINATED NODE   READINESS GATES
dev-test-v1-674f467644-t94dc   1/1     Running   0          6s    10.128.2.38   ip-10-0-155-159.ap-southeast-1.compute.internal   <none>           <none>

2. Log in to the 'dev-test' container using the OCP remote shell command

PS C:\Users\ljwca\Documents\ocp> oc rsh dev-test-v1-674f467644-t94dc
$
$ uname -a
Linux dev-test-v1-674f467644-t94dc 4.18.0-193.14.3.el8_2.x86_64 #1 SMP Mon Jul 20 15:02:29 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

3. Network scanning

This step takes 1~2 hours to complete all scanning.

$ nmap -sP 10.128.0.0/14
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-29 17:20 UTC
Nmap scan report for ip-10-128-0-1.ap-southeast-1.compute.internal (10.128.0.1)
Host is up (0.0025s latency).
Nmap scan report for ip-10-128-0-2.ap-southeast-1.compute.internal (10.128.0.2)
Host is up (0.0024s latency).
Nmap scan report for 10-128-0-3.metrics.openshift-authentication-operator.svc.cluster.local (10.128.0.3)
Host is up (0.0023s latency).
Nmap scan report for 10-128-0-4.metrics.openshift-kube-scheduler-operator.svc.cluster.local (10.128.0.4)
Host is up (0.0027s latency).
.
.
.

After completion of the scanning, you will be able to find the 'critical-app' on the list.

4. Application scanning of the target

You can find the open service ports on the target using nmap.

$ nmap 10.129.2.71
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-29 17:23 UTC
Nmap scan report for 10-129-2-71.critical-app.critical-app.svc.cluster.local (10.129.2.71)
Host is up (0.0012s latency).
Not shown: 998 closed ports
PORT     STATE SERVICE
80/tcp   open  http
8888/tcp open  sun-answerbook

Nmap done: 1 IP address (1 host up) scanned in 0.12 seconds
$

But you will see a 403 error when you try to access the server using port 80. This happens because the default Apache access control only allows traffic from NGINX App Protect.

$ curl http://10.129.2.71/
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access this resource.</p>
<hr>
<address>Apache/2.4.46 (Debian) Server at 10.129.2.71 Port 80</address>
</body></html>
$

Now, you can see the response through port 8888.

$ curl http://10.129.2.71:8888/
<html>
<head>
<title> Network Operation Utility - NSLOOKUP </title>
</head>
<body>
<font color=blue size=12>NSLOOKUP TOOL</font><br><br>
<h2>Please type the domain name into the below box.</h2>
<h1>
<form action="/index.php" method="POST">
<p>
<label for="target">DNS lookup:</label>
<input type="text" id="target" name="target" value="www.f5.com">
<button type="submit" name="form" value="submit">Lookup</button>
</p>
</form>
</h1>
<font color=red>This site is vulnerable to Web Exploit. Please use this site as a test purpose only.</font>
</body>
</html>
$

5. Performing the command injection attack

$ curl -d "target=www.f5.com|cat /etc/passwd&form=submit" -X POST http://10.129.2.71:8888/index.php
<html><head><title>SRE DevSecOps - East-West Attack Blocking</title></head><body><font color=green size=10>NGINX App Protect Blocking Page</font><br><br>Please consult with your administrator.<br><br>Your support ID is: 878077205548544462<br><br><a href='javascript:history.back();'>[Go Back]</a></body></html>
$
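Before moving on to the dashboard, you can also confirm from the command line that the blocked request reached Elasticsearch. A quick sketch using the index pattern from the Logstash output section (run on the Elasticsearch host; add credentials with -u if your cluster requires authentication):

# Search the NGINX App Protect indices for the latest WAF violation
curl -s 'http://localhost:9200/nginx-*/_search?q=outcome_reason:SECURITY_WAF_VIOLATION&size=1&pretty'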
6. Verify the logs in the Kibana dashboard

You should be able to see the NGINX App Protect alerts in your Elasticsearch/Kibana dashboard.

7. Verify that Ansible terminates the compromised pod

Ansible deletes the compromised pod.

Summary

Today's cyber threats are getting more and more sophisticated. Attackers keep attempting to find the weakest link in a company's infrastructure and then move from there to the data in the company using that link. In most cases, the weakest link of the organization is the human, and the company stores its critical data in the application. This is why attackers use phishing emails to compromise the user's laptop and leverage it to access the application. While F5 is working very closely with our key alliance partners, such as Cisco and FireEye, to stop advanced malware at the first stage, our NGINX App Protect can work as another layer of defence for the application to protect the organization's data. F5, Red Hat, and Elastic have developed this new protection mechanism as an automated process. This use case allows the DevSecOps team to easily deploy an advanced security layer in their OpenShift cluster. If you want to learn more about this use case, please visit the F5 Business Development official GitHub link here.

F5 and Cisco ACI Essentials: Automate automate automate !!!
This article will focus on automation support by BIG-IP and Cisco ACI, and how automation tools, specifically Ansible, can be used to automate different use cases. Before getting into the weeds, let's discuss and understand BIG-IP's and Cisco ACI's automation strategies.

BIG-IP automation strategy

BIG-IP's automation strategy is simple: abstract as much complexity as possible from the user and give the user an easy button to deploy their BIG-IP configuration. This honestly means different things to different people. Some prefer sending a single API call to perform one action (a one-to-one mapping between API and configuration), while others prefer a more declarative approach where one API call performs multiple actions, basically a one-to-many mapping between API (1) and configuration (N). A great link to refresh and learn about the different options: https://www.f5.com/products/automation-and-orchestration

Cisco ACI automation strategy

Cisco Application Policy Infrastructure Controller (APIC) is the network controller for the ACI fabric. APIC is the unified point of automation and management for the Cisco ACI fabric, policy enforcement, and health monitoring. The Cisco ACI programmability model provides complete programmatic access using APIC. Click here to learn more: https://developer.cisco.com/site/aci/

Automation tools

There are a lot of automation tools being discussed for network automation, BUT the one that comes up in every customer conversation is Ansible. Its simplicity, maturity, and community adoption have made it very popular. In this article we are going to focus on using Ansible to automate a service discovery use case.

Use Case: Dynamic EP attach/detach

Let's take the example of a simple HTTP web service made highly available and secure using a BIG-IP virtual IP address. This web service has a number of backend web servers hosting the application, and the IPs of these web servers are configured on the BIG-IP as pool members. These same web server IPs are learned as endpoints in the ACI fabric and are part of an Endpoint Group (EPG) on the APIC. Hence there is a logical mapping between an EPG on APIC and a pool on the BIG-IP. Now, if the application adds or deletes web servers, maybe to save cost or to deal with an increase or decrease in traffic, the web server IPs are automatically learned/unlearned on APIC. BUT an admin still has to add/remove those web server IPs from the pool on the BIG-IP. This can be a burden on the network admin, especially if it happens very often. Here is where automation can help; let's look at how in the next section.

More details on the use case can be found at https://devcentral.f5.com/s/articles/F5-Cisco-ACI-Essentials-Dynamic-pool-sizing-using-the-F5-ACI-ServiceCenter

Automation of Use Case: Dynamic EP attach/detach

Automation can be achieved by using Ansible and Ansible Tower, where API calls are made directly to the BIG-IP. Another option is to use the F5 ACI ServiceCenter (a native F5 ACI integration) to automate this particular use case.

Ansible and Ansible Tower

To learn more about Ansible and Ansible Tower: https://www.ansible.com/products/tower

Using this method of automation, separate API calls are made directly to the ACI and the BIG-IP.
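As a concrete illustration of what a direct, imperative call to the BIG-IP looks like (the one-to-one mapping between API and configuration mentioned earlier), here is a minimal sketch using iControl REST; the credentials, address, and pool name are placeholders, not part of the original use case:

# One API call <-> one configuration object: create an LTM pool on the BIG-IP
curl -sku admin:admin -H "Content-Type: application/json" \
  -X POST https://10.192.73.xx/mgmt/tm/ltm/pool \
  -d '{"name": "dynamic_pool", "partition": "Common"}'

The playbook below strings many such calls together (via the aci_rest and bigip_* Ansible modules) so the admin does not have to make them one by one.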
Sample playbook to perform the addition and deletion of pool members on a BIG-IP pool based on members in a particular EPG. The mapping of pool to EPG is provided as input to the playbook.

---
- name: Dynamic end point attach/detach
  hosts: aci
  connection: local
  gather_facts: false
  vars:
    epg_members: []
    pool_members: []
    pool_members_ip: []
    bigip_ip: 10.192.73.xx
    bigip_password: admin
    bigip_username: admin
    # Here we are mapping pool 'dynamic_pool' to EPG 'internalEPG' which belongs to APIC tenant 'TenantDemo'
    app_profile_name: AppProfile
    epg_name: internalEPG
    partition: Common
    pool_name: dynamic_pool
    pool_port: 80
    tenant_name: TenantDemo
  tasks:
    - name: Setup provider
      set_fact:
        provider:
          server: "{{bigip_ip}}"
          user: "{{bigip_username}}"
          password: "{{bigip_password}}"
          server_port: "443"
          validate_certs: "false"

    - name: Get end points learned from End Point group
      aci_rest:
        action: "get"
        uri: "/api/node/mo/uni/tn-{{tenant_name}}/ap-{{app_profile_name}}/epg-{{epg_name}}.json?query-target=subtree&target-subtree-class=fvIp"
        host: "{{inventory_hostname}}"
        username: "{{ lookup('env', 'ANSIBLE_NET_USERNAME') }}"
        password: "{{ lookup('env', 'ANSIBLE_NET_PASSWORD') }}"
        validate_certs: "false"
      register: eps

    - name: Get the IP's of the servers part of the EPG
      set_fact:
        epg_members: "{{epg_members + [item]}}"
      loop: "{{eps | json_query(query_string)}}"
      vars:
        query_string: "imdata[*].fvIp.attributes.addr"
      no_log: True

    - name: Get only the IPv4 IP's
      set_fact:
        epg_members: "{{epg_members | ipv4}}"

    - name: Adding Pool members to the BIG-IP
      bigip_pool_member:
        provider: "{{provider}}"
        state: "present"
        name: "{{item}}"
        host: "{{item}}"
        port: "{{pool_port}}"
        pool: "{{pool_name}}"
        partition: "{{partition}}"
      loop: "{{epg_members}}"

    - name: Query BIG-IP facts
      bigip_device_facts:
        provider: "{{provider}}"
        gather_subset:
          - ltm-pools
      register: bigip_facts

    - name: "Show members belonging to pool {{pool_name}}"
      set_fact:
        pool_members: "{{pool_members + [item]}}"
      loop: "{{bigip_facts.ltm_pools | json_query(query_string)}}"
      vars:
        query_string: "[?name=='{{pool_name}}'].members[*].name[]"

    - set_fact:
        pool_members_ip: "{{pool_members_ip + [item.split(':')[0]]}}"
      loop: "{{pool_members}}"

    - debug: "msg={{pool_members_ip}}"

    # If there are any members on the BIG-IP that are not present in the EPG, then delete them
    - name: Find the members to be deleted if any
      set_fact:
        members_to_be_deleted: "{{ pool_members_ip | difference(epg_members) }}"

    - debug: "msg={{members_to_be_deleted}}"

    - name: Delete Pool members from the BIG-IP
      bigip_pool_member:
        provider: "{{provider}}"
        state: "absent"
        name: "{{item}}"
        port: "{{pool_port}}"
        pool: "{{pool_name}}"
        preserve_node: yes
        partition: "{{partition}}"
      loop: "{{members_to_be_deleted}}"

Ansible Tower's scheduling feature can be used to schedule this playbook to run every minute, every hour, or once per day, based on how often an application is expected to change and how important it is for the configuration on both the Cisco ACI and the BIG-IP to be in sync.
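If Ansible Tower is not available, a plain crontab entry on the control node can approximate that scheduling. A rough sketch; the paths, filenames, and interval here are placeholders:

# Keep the EPG-to-pool mapping in sync every 5 minutes
*/5 * * * * ansible-playbook /opt/automation/dynamic_ep.yml -i /opt/automation/inventory >> /var/log/dynamic_ep_sync.log 2>&1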
F5 ACI ServiceCenter

To learn more about the integration: https://www.f5.com/cisco

The F5 ACI ServiceCenter is installed on the APIC controller. Here, automation can be used to create the initial EPG-to-pool mapping. Once the mapping is created, the F5 ACI ServiceCenter handles the dynamic sizing of pools based on events generated by APIC. Events are generated when a server is learned/unlearned on an EPG, which is what the F5 ACI ServiceCenter listens to, and it accordingly adds or removes pool members from the BIG-IP.

Sample playbook to deploy the mapping configuration on the BIG-IP through the F5 ACI ServiceCenter:

---
- name: Deploy EPG to Pool mapping
  hosts: localhost
  gather_facts: false
  connection: local
  vars:
    apic_ip: "10.192.73.xx"
    big_ip: "10.192.73.xx"
    partition: "Dynamic"
  tasks:
    - name: Login to APIC
      uri:
        url: https://{{apic_ip}}/api/aaaLogin.json
        method: POST
        validate_certs: no
        body_format: json
        body:
          aaaUser:
            attributes:
              name: "admin"
              pwd: "******"
        headers:
          content_type: "application/json"
        return_content: yes
      register: cookie

    - debug: msg="{{cookie['cookies']['APIC-cookie']}}"

    - set_fact:
        token: "{{cookie['cookies']['APIC-cookie']}}"

    - name: Login to BIG-IP
      uri:
        url: https://{{apic_ip}}/appcenter/F5Networks/F5ACIServiceCenter/loginbigip.json
        method: POST
        validate_certs: no
        body:
          url: "{{big_ip}}"
          user: "admin"
          password: "admin"
        body_format: json
        headers:
          DevCookie: "{{token}}"

    # The body of this request defines the mapping of Pool to EPG
    # Here we are mapping pool 'web_pool' to EPG 'internalEPG' which belongs to APIC tenant 'TenantDemo'
    - name: Deploy AS3 dynamic EP mapping
      uri:
        url: https://{{apic_ip}}/appcenter/F5Networks/F5ACIServiceCenter/updateas3data.json
        method: POST
        validate_certs: no
        body:
          url: "{{big_ip}}"
          partition: "{{partition}}"
          application: "DemoApp1"
          json:
            class: Application
            template: http
            serviceMain:
              class: Service_HTTP
              virtualAddresses:
                - 10.168.56.100
              pool: web_pool
            web_pool:
              class: Pool
              monitors:
                - http
              members:
                - servicePort: 80
                  serverAddresses: []
                - addressDiscovery: event
                  servicePort: 80
            constants:
              class: Constants
              serviceCenterEPG:
                web_pool:
                  tenant: TenantDemo
                  application: AppProfile
                  epg: internalEPG
        body_format: json
        status_code:
          - 202
          - 200
        headers:
          DevCookie: "{{token}}"
        return_content: yes
      register: complete_info

    - name: Get task ID of above request
      set_fact:
        task_id: "{{ complete_info.json.message.taskId}}"
      when: complete_info.json.code == 202

    - name: Get deployment status
      uri:
        url: https://{{apic_ip}}/appcenter/F5Networks/F5ACIServiceCenter/getasynctaskresponse.json
        method: POST
        validate_certs: no
        body:
          taskId: "{{task_id}}"
        body_format: json
        headers:
          DevCookie: "{{token}}"
        return_content: yes
      register: result
      until: result.json.message.message != "in progress"
      retries: 5
      delay: 2
      when: task_id is defined

    - name: Display final result
      debug:
        var: result

After deploying this configuration, adding/deleting pool members and making sure the configuration is in sync is the responsibility of the F5 ACI ServiceCenter.

Takeaways

Both methods are highly effective and usable. The choice of which one to use comes down to the operational model in your environment. Some pros and cons to help make the decision on which platform to use for automation:
Ansible Tower

Pros:
- No dependency on any other tools
- Fits in better with a broader company automation strategy of using Ansible for ALL network automation

Cons:
- Have to manage playbook execution and scheduling using Ansible Tower
- If more logic is needed beyond what's described above, playbooks will have to be written and maintained
- Execution of the playbook is based on scheduling and is not event driven

F5 ACI ServiceCenter

Pros:
- Only the pool-to-EPG mapping has to be deployed using automation; the rest is handled by the application
- User interface to view the pool member to EPG mapping once deployed, and to view discrepancies if any
- Limited automation knowledge is needed; the heavy lifting is done by the application
- Dynamically adding/deleting pool members is event driven; as members are learned/unlearned by the F5 ACI ServiceCenter, an action is taken

Cons:
- Another tool is required
- Customization of the pool-to-EPG mapping is not available; only one-to-one EPG-to-pool mapping is supported

References

Learn about the F5 ACI ServiceCenter and other Cisco integrations: https://f5.com/cisco
Download the F5 ACI ServiceCenter: https://dcappcenter.cisco.com/f5-aci-servicecenter.html
Lab to execute Ansible playbooks: https://dcloud.cisco.com (Lab name: F5 and Ansible)

How to Use BIG-IQ and Ansible to Build Advanced BIG-IP Automation Workflows
It's no secret that automation of networking, security, and application development processes offers a laundry list of benefits: reduced deployment time, lowered cost, fewer errors, and more resilient systems, to name a few. One of the most popular tools for building automation workflows is Ansible. Ansible is a powerful, open-source tool that simplifies and automates many common tasks and enables infrastructure as code for creating, deploying, and managing F5 application delivery and security services. This is accomplished through playbooks and roles available on Ansible Galaxy.

Another way to streamline working with BIG-IP is with BIG-IQ Centralized Management. BIG-IQ combines deep, app-centric visibility and dashboarding together with device, configuration, and policy management in a unified, intuitive user interface. From BIG-IQ, you can create new BIG-IP Virtual Editions (VEs), provision them with Declarative Onboarding, create advanced AS3 services, move deployments, upgrade software, and much more.

Together, Ansible and BIG-IQ make automation and management of your BIG-IP environment simple and straightforward, enabling an effective, intuitive, data-rich, and highly visual solution that offers value to networking/F5 gurus, security practitioners, and application owners/developers alike. To make things even easier, the F5 team has developed several community-supported Ansible roles that are designed to inject automation into workflows and make BIG-IQ's simple app-centric management functionality even better.

Please note that this workflow assumes that you already have a BIG-IQ Centralized Management deployment up and running. The end result will be a fully provisioned BIG-IP deployment that can be fully managed (client-to-server visibility, troubleshooting, object-level configuration, and more) from BIG-IQ's intuitive, role-specific GUI.

You can get started with these roles and workflows today by checking out F5's repository on Ansible Galaxy. To use these Ansible roles and playbooks, you'll need to download and install them on a local workstation that will be used for managing F5 deployments.

Use the Ansible roles for BIG-IQ below to:

- Create new VEs
- Onboard VEs with DO
- Create and deploy common objects such as SSL certs and WAF policies
- Create AS3 application delivery and security services
- Move deployments across BIG-IPs

For additional how-to-use resources, guidance, and labs for BIG-IQ, check out the video library and the BIG-IQ labs.
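For context, the snippets that follow are task lists meant to be dropped into a play. As a minimal sketch (the play header below is an assumption; the role names are the ones used throughout this article), you would first install the roles from Ansible Galaxy and then wrap the tasks like so:

# Install the community-supported roles referenced in this workflow
ansible-galaxy install f5devcentral.bigiq_create_ve f5devcentral.atc_deploy \
  f5devcentral.bigiq_pinning_deploy_objects f5devcentral.bigiq_move_app_dashboard

---
# Minimal play wrapper assumed by the task snippets below
- name: BIG-IQ automation workflow
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    # paste the task lists from the sections that follow here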
Create a BIG-IP VE in AWS

tasks:
  - name: Create a VE in AWS
    include_role:
      name: f5devcentral.bigiq_create_ve
    vars:
      cloud_environment: "BIG-IQ AWS US-East"
      ve_name: "bigipvm01"
    register: status

  - name: Get AWS BIG-IP VE IP address (port 8443)
    debug:
      msg: "{{ ve_ip_address }}"

  - name: Get AWS BIG-IP VE private Key Filename
    debug:
      msg: "{{ private_key_filename }}"

Onboard the New BIG-IP VE with Declarative Onboarding

tasks:
  - name: Onboard BIG-IP VE with DO
    include_role:
      name: f5devcentral.atc_deploy
    vars:
      atc_service: Device
      atc_method: POST
      atc_declaration: "{{ lookup('template','do_bigip_aws.j2') }}"
      atc_delay: 30
      atc_retries: 15
    register: atc_DO_status

do_bigip_aws.j2:

{
  "class": "DO",
  "declaration": {
    "schemaVersion": "1.5.0",
    "class": "Device",
    "async": true,
    "Common": {
      "class": "Tenant",
      "myLicense": {
        "class": "License",
        "licenseType": "licensePool",
        "licensePool": "byol-pool",
        "bigIpUsername": "admin",
        "bigIpPassword": "secret"
      },
      "myProvision": {
        "class": "Provision",
        "ltm": "nominal",
        "avr": "nominal"
      },
      "myNtp": {
        "class": "NTP",
        "servers": [
          "169.254.169.123"
        ],
        "timezone": "UTC"
      },
      "admin": {
        "class": "User",
        "shell": "bash",
        "userType": "regular",
        "partitionAccess": {
          "all-partitions": {
            "role": "admin"
          }
        },
        "password": "secret"
      },
      "hostname": "bigipvm01.example.com"
    }
  },
  "targetUsername": "admin",
  "targetHost": "{{ ve_ip_address }}",
  "targetPort": 8443,
  "targetSshKey": {
    "path": "{{ private_key_filename }}"
  },
  "bigIqSettings": {
    "conflictPolicy": "USE_BIGIQ",
    "deviceConflictPolicy": "USE_BIGIP",
    "failImportOnConflict": false,
    "versionedConflictPolicy": "KEEP_VERSION",
    "statsConfig": {
      "enabled": true
    }
  }
}

Create SSL Certificate and Key on BIG-IQ

tasks:
  - name: Authenticate to BIG-IQ
    uri:
      url: https://{{ provider.server }}:{{ provider.server_port }}/mgmt/shared/authn/login
      method: POST
      headers:
        Content-Type: application/json
      body:
        username: "{{ provider.user }}"
        password: "{{ provider.password }}"
        loginProviderName: "{{ provider.auth_provider | default('tmos') }}"
      body_format: json
      timeout: 60
      status_code: 200, 202
      validate_certs: "{{ provider.validate_certs }}"
    register: auth

  - name: Create SSL Certificate and Key on BIG-IQ
    uri:
      url: https://{{ provider.server }}:{{ provider.server_port }}/mgmt/cm/adc-core/tasks/certificate-management
      method: POST
      headers:
        Content-Type: application/json
        X-F5-Auth-Token: "{{ auth.json.token.token }}"
      body: |
        {
          "issuer": "Self",
          "itemName": "mywebapp.crt",
          "itemPartition": "Common",
          "durationInDays": 365,
          "country": "US",
          "commonName": "mywebapp.example.com",
          "division": "MyDiv",
          "organization": "MyOrg",
          "locality": "Seattle",
          "state": "WA",
          "subjectAlternativeName": "DNS: mywebapp.example.com",
          "securityType": "normal",
          "keyType": "RSA",
          "keySize": 2048,
          "command": "GENERATE_CERT"
        }
      body_format: json
      timeout: 60
      status_code: 200, 202
      validate_certs: "{{ provider.validate_certs }}"
    register: json_response

Pin and Deploy SSL Certificate and Key to BIG-IP

tasks:
  - name: Pin and deploy SSL certificate and key to BIG-IP
    include_role:
      name: f5devcentral.bigiq_pinning_deploy_objects
    vars:
      bigiq_task_name: "Deployment through Ansible/API - mywebapp"
      modules:
        - name: ltm
          pins:
            - { type: "sslCertReferences", name: "mywebapp.crt" }
            - { type: "sslKeyReferences", name: "mywebapp.key" }
      device_address: "{{ ve_ip_address }}"
    register: status
Deploy an AS3 Service to BIG-IP

tasks:
  - name: Deploy AS3 application services to BIG-IP
    include_role:
      name: f5devcentral.atc_deploy
    vars:
      atc_service: AS3
      atc_method: POST
      atc_declaration: "{{ lookup('template','as3_bigiq_https_app.j2') }}"
      atc_delay: 30
      atc_retries: 15
    register: atc_AS3_status

as3_bigiq_https_app.j2:

{
  "class": "AS3",
  "action": "deploy",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.12.0",
    "target": {
      "address": "{{ ve_ip_address }}"
    },
    "myorg": {
      "class": "Tenant",
      "mywebapp": {
        "class": "Application",
        "schemaOverlay": "AS3-F5-HTTPS-offload-lb-existing-cert-template-big-iq-default-v1",
        "template": "https",
        "serviceMain": {
          "class": "Service_HTTPS",
          "pool": "Pool",
          "enable": true,
          "serverTLS": "TLS_Server",
          "virtualPort": 443,
          "profileAnalytics": {
            "use": "Analytics_Profile"
          },
          "virtualAddresses": [
            "0.0.0.0"
          ]
        },
        "Pool": {
          "class": "Pool",
          "members": [
            {
              "adminState": "enable",
              "servicePort": 80,
              "serverAddresses": [
                "10.1.3.23"
              ]
            }
          ]
        },
        "TLS_Server": {
          "class": "TLS_Server",
          "certificates": [
            {
              "certificate": "Certificate"
            }
          ]
        },
        "Certificate": {
          "class": "Certificate",
          "privateKey": {
            "bigip": "/Common/mywebapp.key"
          },
          "certificate": {
            "bigip": "/Common/mywebapp.crt"
          }
        },
        "Analytics_Profile": {
          "class": "Analytics_Profile",
          "collectIp": false,
          "collectGeo": false,
          "collectUrl": false,
          "collectMethod": false,
          "collectUserAgent": false,
          "collectOsAndBrowser": false,
          "collectPageLoadTime": false,
          "collectResponseCode": true,
          "collectClientSideStatistics": true
        }
      }
    }
  }
}

Move an AS3 Service Within the BIG-IQ Dashboard

tasks:
  - name: Move an AS3 application service in BIG-IQ dashboard
    include_role:
      name: f5devcentral.bigiq_move_app_dashboard
    vars:
      apps:
        - name: myWebApp
          pins:
            - name: "myorg_mywebapp"
    register: status

The BIG-IP and Ansible Sandbox, The Perfect Automation Playground!
Ever need a sandbox area to validate and test your F5 BIG-IP automation? Well, look no further! Ansible and F5 have worked together to create a deployable sandbox solution for all your testing needs within AWS.

Why do I need a sandbox environment?

In most use cases, playbooks are never perfect out of the box. We all need an area to test that code and run through many different scenarios to ensure that the code is fully functional, performs error detection/correction, and doesn't create outages in an environment, all of which needs to happen before that code gets pushed to production. Setting up that kind of lab takes time to set up and configure, and it can even be problematic to reset back to a "factory default" configuration. The ability to reset the infrastructure to a clean state that also has the unique add-ons necessary for your testing makes the lab reusable for testing new or older builds of code.

So where do I start?

The best place to get started is the F5 Cloud Docs website: https://clouddocs.f5.com/training/automation-sandbox/

The website will help with the following:

- Information on setting up the prerequisites (installing Docker, setting up an IAM account in AWS, a Route 53 domain, etc.)
- Using the provided Dockerfile code to deploy a local container that will be the control node for deploying the AWS environment
- Commands to download the Git repository for the Red Hat Automation Platform Workshops (https://github.com/ansible/workshops/)
- Commands to execute the code to deploy or destroy a sandbox Automation Workshop with an F5 configuration on AWS (a rough sketch of these commands appears at the end of this article)
- Basic (101) and advanced (201) use cases

From there the sky is the limit! The sandbox is the perfect automation playground to test any code you might have.

What will the Red Hat Ansible Automation Platform Workshops build with the F5 configuration?

As per the diagram below, the Ansible Automation Platform Workshops will build out and configure an AWS VPC, an Ansible host, a BIG-IP (Best licensed), and two web servers. The entire environment will be built and configured during the deployment. Each device within the VPC will also have public and private IP addresses for intranet access between the nodes and external internet accessibility for importing and testing playbooks and configurations.

When all the code is run and the lab is built, it will provide the essential information needed to connect to the environment:

- Workshop name: the unique identifier that was provided in the variables file
- Instructor inventory: the hostnames with the public and private IP addresses to connect to the environment with
- Private key: if you wish to use SSH authentication to the nodes

Want to learn more?

Visit the F5 Automation Sandbox cloud docs website: https://clouddocs.f5.com/training/automation-sandbox/
Visit the Red Hat Ansible Automation Platform Workshops: https://ansible.github.io/workshops/
Visit the Red Hat Ansible Automation Platform Git repository: https://github.com/ansible/workshops/
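For the deploy/destroy step mentioned above, the commands look roughly like the sketch below. The playbook and variable file names are assumptions based on the layout of the ansible/workshops repository; check the sandbox documentation for the exact invocation:

# From the control-node container, deploy the F5 sandbox
cd workshops/provisioner
ansible-playbook provision_lab.yml -e @my_f5_vars.yml

# Destroy the same environment to return to a clean state
ansible-playbook teardown_lab.yml -e @my_f5_vars.yml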