Infrastructure as Code: Automating F5 Distributed Cloud CEs with Ansible

Introduction

Welcome to the first installment of our Infrastructure as Code (IaC) series, focusing on F5 products and Ansible. This series is something I have wanted to do for a long time: showcase how IaC with Ansible Automation Platform can deliver Day 0 through Day 2 operations across multiple F5 virtualized platforms.

Over time, I've encountered numerous financial clients expressing interest in this topic. For many of these clients, the prospect of leveraging IaC to redeploy an environment outweighs the traditional approach of performing upgrades. This series will hopefully provide insight, documentation, and code for anyone embarking on this journey.

 

Why Ansible Automation Platform?

Like most people, I started my journey with community editions of Ansible. As my coding became more complex, so did the need to ensure that my lab infrastructure adhered to the best security guidelines required by my company (my goal being to mimic how customers would/should do things in real life). I began utilizing Ansible Automation Platform to ensure my credentials were protected, as well as to organize and share my code with the rest of my team (following the 'just in case you got hit by a bus' theory).

Ansible Automation Platform utilizes execution environments (EEs) to ensure code runs consistently and cleanly every time. I am now also building Execution Environments via GitHub workflows and pushing them up to Quay.io (https://github.com/VDI-Tech-Guy/f5-execution-engines). Huge thanks to Colin McNaughton at Red Hat for making my life so much easier with building EEs!
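For anyone who has not built an EE before, a definition file plus ansible-builder covers most of the work. Below is a minimal, illustrative execution-environment.yml (schema version 3); the base image and dependency lists are assumptions for this series and not the exact contents of the repository linked above.

    # Illustrative execution-environment.yml for ansible-builder (schema v3).
    # The base image and dependencies are assumptions, not the repo's exact file.
    version: 3
    images:
      base_image:
        name: quay.io/ansible/creator-ee:latest   # any supported EE base image works
    dependencies:
      galaxy:
        collections:
          - community.vmware   # vSphere modules used throughout this series
      python:
        - pyvmomi              # Python SDK required by community.vmware

Building and pushing the image is then a matter of running ansible-builder and pushing the result to your registry, which is exactly the kind of workflow the linked repository automates through GitHub.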

 

Why deploy F5 Distributed Cloud on VMware vSphere?

As I mentioned before, I had wanted to build this Infrastructure as Code (IaC) project for a while, well before the Broadcom acquisition of VMware. Being an ex-VMware employee, I brought a lot of virtualization platform knowledge into this project, so I started my focus on deploying on VMware vSphere.

F5 Distributed Cloud can be deployed in any cloud, anywhere. However, I really wanted to focus on on-premises deployments because not every customer can afford the cloud. Moreover, there's always a back-and-forth battle between on-premises and the cloud, which has evolved into the Hybrid Cloud and the Multi-Cloud.

I do intend to extend this series to the Multi-Cloud, but these initial deployments will be focused on VMware vSphere, as it is still utilized in many organizations across the globe.


Information about the Setup in the Demo Video

If you watch the video (down below) on how the deployment works, you can see that I did a lot of the pre-work prior to launching the deployment in the Git repository (link in Resources).

Here are some of the pre-work items I completed:

  1. Had a fully functional Ansible Automation Platform 2.4+ environment set up and working (at the time, the controller version was 4.4.4).
  2. The Execution Environment was imported into the Ansible Automation Platform Controller.
  3. The Project was set up to import the playbooks from the Git repository (in the Resources section below) and to set the default Execution Environment.
  4. The Demo Inventory was set up (in our use case we only needed the vCenter host).
  5. Network Credentials were set up for the vCenter.
  6. The Template was set up and had its variables populated (note that the API key is hidden).
  7. As mentioned in the video (below), the variables were populated for my environment and contain all of the required information. I have provided a demo example in the Git repository so anyone can adapt my settings to their own environment; the example also has comments explaining each field (or area of a field) and the purpose of the variable. My populated variables looked like this:

    {
      "rhel_location": "https://vesio.blob.core.windows.net/releases/rhel/9/x86_64/images/vmware/rhel-9.2023.29-20231212012955-single-nic.ova",
      "xc_api_credential": "_____________________________________",
      "xc_namespace": "mmabis-automation",
      "xc_console_host": "f5-bd",
      "xc_user": "admin",
      "xc_pass": "Ansible123!",
      "vcenter_hostname": "{{ ansible_host }}",
      "vcenter_username": "{{ ansible_env.ANSIBLE_NET_USERNAME }}",
      "vcenter_password": "{{ ansible_env.ANSIBLE_NET_PASSWORD }}",
      "vcenter_validate_certs": false,
      "datacenter_name": "Apex",
      "cluster_name": "Worlds-Edge",
      "datastore": "TrueNAS-SSD",
      "dvs_switch_name": "DSC-DVS",
      "dns_name_servers": [
        "192.168.192.20",
        "192.168.192.1"
      ],
      "dns_name_search": [
        "dsc-services.local",
        "localdomain"
      ],
      "ntp_servers": [
        "0.pool.ntp.org",
        "1.pool.ntp.org",
        "2.pool.ntp.org"
      ],
      "domain_fqdn": "dsc-services.local",
      "DVS_Name": "{{dvs_switch_name}}",
      "Internal_Network": "DVS-Server-vLan",
      "External_Network": "DVS-DMZ-vLan",
      "resource_pool_name": "Lab-XC",
      "waiting_period": 2,
      "temp_download_location": "/tmp/xc-ova-download.ova",
      "xc_ova_builds": [
        {
          "hostname": "xc-automation-rhel-demo",
          "tmpl_name": "xc-automation-rhel-demo",
          "admin_password": "Ansible123!",
          "cluster_name": "xc-automation-cluster-rhel-demo",
          "dhcp": "no",
          "external_ip": "172.16.192.170",
          "external_ip_subnet_prefix": "24",
          "external_ip_gw": "172.16.192.1",
          "external_ip_route": "0.0.0.0/0",
          "internal_ip": "192.168.192.170",
          "internal_ip_subnet_prefix": "22",
          "internal_ip_gw": "192.168.192.1",
          "certified_hw": "vmware-regular-nic-voltmesh",
          "latitude": "39.51833126",
          "longitude": "-104.759496962",
          "build_count": 3,
          "nic_config": "rhel-multi"
        }
      ]
    }
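
To give a sense of how these variables get consumed, here is a hedged sketch of an OVA deployment using the community.vmware collection. This is not the repository's playbook: the play target, the OVA network names, and the omission of the per-node OVA properties are illustrative assumptions and would need to match your vSphere environment and the CE OVA's actual questions.

    # Illustrative sketch only -- not the playbook from the repository.
    - name: Deploy F5 Distributed Cloud CE OVAs into vSphere
      hosts: all            # the demo inventory only contains the vCenter host
      connection: local
      gather_facts: false
      tasks:
        - name: Download the CE OVA to a temporary location
          ansible.builtin.get_url:
            url: "{{ rhel_location }}"
            dest: "{{ temp_download_location }}"
            mode: "0644"

        - name: Deploy one CE per entry in xc_ova_builds
          community.vmware.vmware_deploy_ovf:
            hostname: "{{ vcenter_hostname }}"
            username: "{{ vcenter_username }}"
            password: "{{ vcenter_password }}"
            validate_certs: "{{ vcenter_validate_certs }}"
            datacenter: "{{ datacenter_name }}"
            cluster: "{{ cluster_name }}"
            datastore: "{{ datastore }}"
            resource_pool: "{{ resource_pool_name }}"
            name: "{{ item.hostname }}"
            ovf: "{{ temp_download_location }}"
            networks:                      # OVA network names are assumptions
              INTERNAL: "{{ Internal_Network }}"
              EXTERNAL: "{{ External_Network }}"
            power_on: true
          loop: "{{ xc_ova_builds }}"

The per-node details (admin password, IP addressing, site token injection, and expanding build_count into three nodes) are intentionally left out of this sketch; the repository handles those pieces.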

Launching the Code

With all of that prework handled, it was as easy as launching the code. There were a few caveats I learned over time when dealing with the automation that I wanted to share:

  • Never re-use a cluster name in F5 Distributed Cloud, especially if it was used with a different version of the CE (there were communication issues between the CEs and the previous cluster information stored in the F5 Distributed Cloud Console).
  • The API credentials need to be system level when accepting registration or creating the token used to import the CE into the environment. This code checks for "{{ xc_namespace }}-token"; if the token exists it is re-used, and if not the code will try to create it, which is why system-level permissions are needed (a sketch of this lookup/create pattern follows after this list).
  • Build count should be 3 by default (it still needs to be defined) or an odd number, based on recommendations I have heard from our F5 field teams.
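
The lookup/create pattern mentioned in the second bullet can be approximated with the ansible.builtin.uri module. Treat the sketch below as an assumption-heavy illustration: the registration endpoint paths and the APIToken authorization header are based on the public F5 Distributed Cloud API and should be verified against your tenant's API documentation.

    # Assumption-heavy sketch of the token lookup/create flow -- verify the
    # endpoint paths and header format against the F5 Distributed Cloud API docs.
    - name: Check whether the namespace site token already exists
      ansible.builtin.uri:
        url: "https://{{ xc_console_host }}.console.ves.volterra.io/api/register/namespaces/system/tokens/{{ xc_namespace }}-token"
        method: GET
        headers:
          Authorization: "APIToken {{ xc_api_credential }}"
        status_code: [200, 404]
      register: xc_token_lookup

    - name: Create the site token when it is missing (requires system-level permissions)
      ansible.builtin.uri:
        url: "https://{{ xc_console_host }}.console.ves.volterra.io/api/register/namespaces/system/tokens"
        method: POST
        headers:
          Authorization: "APIToken {{ xc_api_credential }}"
        body_format: json
        body:
          metadata:
            name: "{{ xc_namespace }}-token"
            namespace: system
          spec: {}
        status_code: [200, 201]
      when: xc_token_lookup.status == 404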

If there are more that I think of, I will definitely edit the post and make sure it stays up to date. When launching the code, I was able to get the lab to build correctly multiple times, so if there is an issue or something I might not have documented well, feel free to let me know, and give it a shot for yourself!
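
On the launch side, everything can be kicked off from the controller UI, but the job template can also be launched from a playbook or pipeline with the awx.awx collection. The sketch below assumes the controller connection is provided through the collection's usual CONTROLLER_HOST and CONTROLLER_OAUTH_TOKEN environment variables, and the job template name is a hypothetical stand-in for whatever you called yours.

    # Illustrative launch of the deployment job template via the awx.awx
    # collection; the template name is a hypothetical example.
    - name: Launch the CE deployment job template
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Launch and wait for the job
          awx.awx.job_launch:
            job_template: "XC-CE-vSphere-Deploy"   # hypothetical template name
            wait: true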

 

YouTube Video now on DevCentral Channel


Resources

 


Conclusion

I do hope that this series will help everyone who wants to embrace IaC. If you have any questions, feel free to reach out!

Published Mar 22, 2024
Version 1.0
