Automating F5 NGINX Instance Manager Deployments on VMware

With F5 NGINX One, customers can use the F5 NGINX One SaaS console to manage inventory, stage and push configurations to cluster groups, and take advantage of our Flexible Consumption Plans (FCPs). However, the NGINX One console may not be feasible for customers in isolated environments with no connectivity outside the organization. In these cases, customers can run self-managed builds with the same NGINX management capabilities inside their isolated environments.

In this article, I step through how to automate F5 NGINX Instance Manager deployments with Packer and Terraform.

Prerequisites

I need a few things in place before getting started with the tutorial:

  • vCenter installed on my ESXi host, so I can log in and access my vSphere console.
  • A client instance with Packer and Terraform installed to run my build. I use a virtual machine on my ESXi host.
  • NGINX license keys, pulled from MyF5 and stored on the client VM where I will run the build (see the sketch below).
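
For reference, here is a minimal sketch of staging the license files on the client VM. The destination paths match what the Packer variables expect later in this tutorial; the source paths under ~/Downloads are just an assumption about where the MyF5 downloads land.

$ sudo mkdir -p /etc/ssl/nginx
$ sudo cp ~/Downloads/nginx-repo.crt ~/Downloads/nginx-repo.key /etc/ssl/nginx/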

Deploying NGINX Instance Manager

Deploying F5 NGINX Instance Manager in your environment involves two steps:

  1. Running a Packer build that outputs a VM template to my datastore
  2. Applying the Terraform build, which uses the VM template from step 1 to deploy and install NGINX Instance Manager

Running the Packer Build

Before running the Packer build, I need to SSH into my client VM and install a Packer-compatible ISO tool and the required plugins.

$ sudo apt-get install mkisofs && packer plugins install github.com/hashicorp/vsphere && packer plugins install github.com/hashicorp/ansible
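
Here, mkisofs gives Packer an ISO-creation tool it can use to build the cloud-init seed image for the Ubuntu autoinstall, while the two plugins add vSphere and Ansible support to the build.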

Second, I clone the GitHub repository and set the parameters for my Packer build in the Packer variables file (nms.pkrvars.hcl).

$ git clone https://github.com/nginxinc/nginx-management-suite-iac.git
$ cd nginx-management-suite-iac/packer/nms/vsphere 
$ cp nms.pkrvars.hcl.example nms.pkrvars.hcl

The table below lists the variables that need to be updated.

Variable        Description
nginx_repo_crt  Path to the license certificate required to install NGINX Instance Manager (/etc/ssl/nginx/nginx-repo.crt)
nginx_repo_key  Path to the license key required to install NGINX Instance Manager (/etc/ssl/nginx/nginx-repo.key)
iso_path        Path to the ISO the VM template boots from; the ISO must be stored in my vSphere datastore
cluster_name    The vSphere cluster
datacenter      The vSphere datacenter
datastore       The vSphere datastore
network         The vSphere network where the Packer build will run; I can use static IPs if DHCP is not available
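
For illustration, here is a sketch of what my nms.pkrvars.hcl might look like once filled in. Every value below is a placeholder for my own environment, and the iso_path format assumes the usual "[datastore] folder/file.iso" vSphere convention:

nginx_repo_crt = "/etc/ssl/nginx/nginx-repo.crt"
nginx_repo_key = "/etc/ssl/nginx/nginx-repo.key"
iso_path       = "[my-datastore] iso/ubuntu-22.04-live-server-amd64.iso"
cluster_name   = "my-cluster"
datacenter     = "my-datacenter"
datastore      = "my-datastore"
network        = "VM Network"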

Now I can run my Packer build:

$ export VSPHERE_URL="my-vcenter-url" 
$ export VSPHERE_PASSWORD="my-password" 
$ export VSPHERE_USER="my-username" 
$ ./packer-build.sh -var-file="nms.pkrvars.hcl"
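
The packer-build.sh wrapper drives Packer for me. If I preferred to invoke Packer directly, the rough equivalent (an assumption about what the wrapper does, not documented behavior) would be:

$ packer init .
$ packer build -var-file="nms.pkrvars.hcl" .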

Note: If DHCP is not available in my vSphere network, I need to assign static IPs before running the Packer build script.

Running the Packer Build with Static IPs

To assign static IPs, I modify the cloud-init template in my Packer build script (packer-build.sh). Under the autoinstall field, I add my Ubuntu Netplan configuration and manually assign my Ethernet IP address, name servers, and default gateway.

#cloud-config
autoinstall:
  version: 1
  network:
    version: 2
    ethernets:
      ens192:  # interface name is a placeholder; match it to the VM's NIC
        addresses:
        - 10.144.xx.xx/20
        nameservers:
          addresses:
          - 172.27.x.x
          - 8.8.x.x
          search: []
        routes:
        - to: default
          via: 10.144.xx.xx
  identity:
    hostname: localhost
    username: ubuntu
    password: ${saltedPassword}
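
The ${saltedPassword} placeholder expects a password hash, not plaintext. One way to generate it on Ubuntu, assuming the whois package (which provides mkpasswd), is:

$ sudo apt-get install whois
$ mkpasswd -m sha-512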

Running the Terraform Build

As mentioned in the previous section, the Packer build outputs a VM template to my vSphere datastore. I should see the template at nms-yyyy-mm-dd/nms-yyyy-mm-dd.vmtx in my datastore.
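
Optionally, if the govc CLI happens to be installed on my client VM, I can confirm the template landed without opening the vSphere UI (the datastore name here is a placeholder):

$ govc datastore.ls -ds=my-datastore | grep nms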

Before running the Terraform build, I set the parameters in the Terraform variables file (terraform.tfvars).

$ cp terraform.tfvars.example terraform.tfvars
$ vi terraform.tfvars

The table below lists the variables that need to be updated.

Variable       Description
cluster_name   The vSphere cluster
datacenter     The vSphere datacenter
datastore      The vSphere datastore
network        The vSphere network where NGINX Instance Manager is deployed and installed
template_name  The VM template generated by the Packer build (nms-yyyy-mm-dd)
ssh_pub_key    The public SSH key (~/.ssh/id_rsa.pub)
ssh_user       The SSH user (ubuntu)
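
As with the Packer variables, here is an illustrative terraform.tfvars; all values are placeholders for my environment, and the template name follows the nms-yyyy-mm-dd pattern from the Packer output:

cluster_name  = "my-cluster"
datacenter    = "my-datacenter"
datastore     = "my-datastore"
network       = "VM Network"
template_name = "nms-2024-10-17"
ssh_pub_key   = "~/.ssh/id_rsa.pub"
ssh_user      = "ubuntu"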

Once the parameters are set, I need to set my environment variables:

$ export TF_VAR_vsphere_url="my-vcenter-url.com" 
$ export TF_VAR_vsphere_password="my-password" 
$ export TF_VAR_vsphere_user="my-username" 
# Set the admin password for the NGINX Instance Manager admin user
$ export TF_VAR_admin_password="my-admin-password"
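
Terraform automatically maps environment variables prefixed with TF_VAR_ to the matching input variables (TF_VAR_admin_password populates var.admin_password, for example), which keeps credentials out of terraform.tfvars.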

Then I initialize and apply my Terraform build:

$ terraform init
$ terraform apply
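
terraform apply prints the execution plan and prompts for confirmation before creating the VM. To preview the changes without applying them, I can run:

$ terraform plan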

Note: If DHCP is not available in my vSphere network, I need to assign static IPs once again, this time in my Terraform configuration, before running the build.

Assigning Static IPs in the Terraform Build (Optional)

To assign static IPs, I need to modify the main Terraform file (main.tf). I add the following clone block inside the vsphere_virtual_machine resource and set the options to the appropriate IPs and netmask.

clone {
    template_uuid = data.vsphere_virtual_machine.template.id
    customize {
      linux_options {
        host_name = "foo"
        domain    = "example.com"
      }
      network_interface {
        ipv4_address = "10.144.xx.xxx"
        ipv4_netmask = 20
      }
      dns_server_list = ["172.27.x.x", "8.8.x.x"]
      ipv4_gateway    = "10.144.xx.xxx"
    }
  }
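
Two details worth noting: ipv4_netmask takes a prefix length (20 here, equivalent to 255.255.240.0) rather than a dotted-quad mask, and each DNS server goes in as its own quoted string. For Linux guests the vSphere provider documents dns_server_list at the customize level, which is why it sits outside network_interface above.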

Connecting to NGINX Instance Manager

Once the Terraform build is complete, I see the NGINX Instance Manager VM running in the vSphere console.

I can open a new tab in my browser, enter the VM's IP address, and log in with the admin user and the password I set in $TF_VAR_admin_password.
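
Before opening the browser, a quick reachability check from the client VM (assuming the UI is served over HTTPS with a self-signed certificate, hence the -k):

$ curl -k -I https://10.144.xx.xxx/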

Conclusion

Installing NGINX Instance Manager in your environment is now easier than ever. Following this tutorial, I can install NGINX Instance Manager in under five minutes and manage my NGINX inventory inside an isolated environment.

Published Oct 17, 2024