VE on VMware - Part 2 - Ansible Deployment

Review, Overview

Part one of this series described how BIG-IP Virtual Edition (VE) can be customized at deployment time in a VMware environment. This article will show how this capability, combined with Ansible, can be used to automate the entire VE deployment.

The phrase is practically an axiom, but it bears repeating: every environment is different. Taking this further, every orchestrated environment is different. Your deployment target may be a clean slate, or you might be working within a web of shell scripts that have a mainstream orchestration framework grafted on. Consider this example illustrative rather than a drop-in solution. I will explain my reasoning for each component.

Information Visibility

As mentioned in part one, BIG-IP v13.1.0.2 and later have tighter integration with open-vm-tools. The direct benefits of this integration are:

  • Increased hypervisor visibility into the running VE.
  • Increased visibility into the hypervisor from within the VE.
  • Injection of per-instance information at the time of deployment.

Visibility into the running VE is improved by open-vm-tools. Consider that the hypervisor obviously knows what virtual hardware is in use, and the state of the virtual machine (guest). The hypervisor can offer IP addresses to the guest on the networks that it is attached to, but the guest is not required to use them. If the guest chooses to run open-vm-tools, then the hypervisor can query the guest's running configuration. The practical result is that the VE's management IP can be discovered by API calls against the hypervisor. This may seem like a small thing, but it's a huge benefit when you lose internal domain resolution, or your IPAM service is down.

Note: Self-IPs and virtual servers cannot be discovered in this way. You must use the published BIG-IP APIs, or another source of configuration information, to discover self-IPs and listeners.
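As a sketch of what that discovery looks like in Ansible, here is a pair of tasks built on the Ansible 2.x vmware_guest_facts module (since renamed vmware_guest_info in the community.vmware collection). The variable names are placeholders, and pyvmomi must be installed on the control node:

# A sketch only: variable names are placeholders; module and return
# values are from the Ansible 2.x vmware_guest_facts module.
- name: Ask vCenter what addresses the guest is using
  vmware_guest_facts:
    hostname: "{{ vcenter_host }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ vmware_datacenter }}"
    name: "{{ ve_name }}"
  register: ve_facts

- name: Show the discovered management address
  debug:
    var: ve_facts.instance.ipv4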

From the other direction, a guest running open-vm-tools has a direct channel to information offered by the hypervisor. We use this capability to expose the properties from part one for configuring the management address and non-default credentials.
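You can watch this channel from inside any guest that runs open-vm-tools. As a quick illustration, assuming the deployment injected an OVF environment under the conventional guestinfo.ovfEnv key, you can dump it from the VE's bash shell:

vmtoolsd --cmd "info-get guestinfo.ovfEnv"

The output is the XML property document the hypervisor handed to the guest at deployment time.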

This brings me to per-instance information. The previous two information flows, host to guest and guest to host, allow for predictable and resilient access to the VE management interface when external infrastructure has failed. Per-instance information assists the programmatic creation of unique VEs from a single template. It is possible for a VE to be created, do useful work for an indeterminate period of time, and be reclaimed without any human interaction.


Ansible Obscures the Mundane

Before we begin to automate a task, we must first understand the components of a task. A classic example is preparing a simple snack. Let us define the process for making toast. We will assume that the bread is already baked, the toaster is already constructed, and our spread of choice is already prepared. What are the steps?

  1. Remove the bread from storage.
  2. Apply a measured amount of heat to the bread (toaster, oven, fire, pan on stove, etc.)
  3. Remove the toasted bread from the heat source, and place it on a work surface.
  4. Remove our chosen spread from storage.
  5. Apply the desired amount of spread to the toast.

At this point the toast has been prepared. There are additional tasks to perform after this process is complete, like cleanup and storing extra material. Those are outside of the scope of the example.

Now let us define the steps for deploying a VE on VMware, each of which we will automate with Ansible.

  1. Obtain the management network information and non-default credentials to be used for this VE.
  2. Deploy the OVA to the specified VMware cluster. The information from step 1 is applied in this step.
  3. Tell the hypervisor to "power on" the VE.
  4. Wait a specified amount of time for initial boot processes to complete.
  5. Set the hostname of the VE. This step verifies that the VE is functional, nominal, and ready for further configuration.

Each of these steps, except for the wait in step four, requires multiple API transactions across one or more TCP connections. This would be annoying to do by hand, so let's try to build a process for infinite reuse. Furthermore, we'll build as little of it as reasonably possible. We will use Ansible modules, and one templated shell script, to prevent us from hand-coding multiple API calls.

We can encapsulate this process in Ansible with a simple logical structure. Ansible uses the term "playbook" to describe a group of "plays" to be performed. Each of the above tasks will become an Ansible play, and those plays will be rolled into an Ansible playbook.
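To make that concrete, here is a condensed sketch of such a playbook, using the files described in the next section. This is not the exact code from the repository linked below; the variable names (ve_name, ve_mgmt_addr, vcenter_*) are placeholders, and the modules shown (template, vmware_guest, wait_for, bigip_hostname) are the Ansible 2.x-era building blocks I would reach for:

---
# deploy_vmware_guest.yaml -- a condensed sketch, not the repository's
# exact code; all variable names are placeholders.
# Step 1 happens in the inventory: the dynamic inventory script supplies
# the per-instance values consumed below.
- name: Deploy a BIG-IP VE on VMware
  hosts: vro-lab-vsphere
  connection: local
  gather_facts: no
  tasks:
    # Step 2: render the ovftool wrapper, then deploy the OVA with it
    - name: Render the templated deployment script
      template:
        src: deploy_ova.bash.j2
        dest: "/tmp/deploy_{{ ve_name }}.bash"
        mode: "0700"

    - name: Deploy the OVA
      command: "/tmp/deploy_{{ ve_name }}.bash"

    # Step 3: tell the hypervisor to power on the VE
    - name: Power on the VE
      vmware_guest:
        hostname: "{{ vcenter_host }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        name: "{{ ve_name }}"
        state: poweredon

    # Step 4: wait for initial boot processes to complete
    - name: Wait for the management interface to answer
      wait_for:
        host: "{{ ve_mgmt_addr }}"
        port: 443
        delay: 60
        timeout: 600

    # Step 5: set the hostname, proving the VE is ready for work
    - name: Set the VE hostname
      bigip_hostname:
        server: "{{ ve_mgmt_addr }}"
        user: admin
        password: "{{ ve_admin_password }}"
        validate_certs: no
        hostname: "{{ ve_name }}.example.net"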

An Assemblage of Text Files

I would like to describe the files that are involved. This will impose structure upon the discussion. Consider this tree view of my working directory.

.
+-- deploy_vmware_guest.yaml               # this is the playbook that we will execute
+-- host_vars
|   +-- vro-lab-vsphere
|       +-- vault.yaml                     # this file contains encrypted credentials
+-- img
|   +-- bigip-vmware-empty-properties.ova  # image file that we will send to the hypervisor
+-- inventory
|   +-- dynamic_inventory.py               # this script emulates an external data service
|   +-- static_inventory                   # this file contains immutable configuration information
+-- templates
    +-- deploy_ova.bash.j2                 # this bash script with jinja2 templating will perform the deployment

"dynaminc_inventory.py" and "static_inventory" - Ansible needs to know what to perform work against before it can actually execute tasks. Recall that step one is the gathering of information specific to this VE instance. Ansible satisfies this step with the concept of a dynamic inventory, meaning that it can get fresh information during execution. Ansible must also be aware of the static inventory information, namely the hypervisor infrastructure. This is handled by the contents of the inventory directory.

"bigip-vmware-empty-properties.ova" - We need an image to deploy on the hypervisor. This is often kept in a volume local to the hypervisor.

"vault.yaml" - Our playbook needs to run unattended, so providing hypervisor credentials at runtime is not acceptable. We satisfy this by using Ansible's vault functionality. The documentation can be found here.

"deploya_ova.bash.j2" - The easiest way to deploy the image in my environment is to use VMware's tried-and-true utility named "ovftool". However, we need to pass dynamic information to it at runtime. This is satisfied by applying Jinja2 templating to a bash script. You can learn more about Jinja2 here.

"deploy_vmware_guest.yaml" - This is the Ansible playbook that will automate each of the tasks.

You may notice that "host_vars" is an oddly named directory, and "vro-lab-vsphere" looks like a hostname. This is on purpose. Ansible allows you to structure your data through defined directory conventions. This topic is explored in the Ansible documentation.

A sanitized version of the code referenced above can be found here.

Putting Ansible to Work

When everything works as intended, the entire deployment is kicked off with a single command:
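ansible-playbook -i inventory deploy_vmware_guest.yaml --vault-password-file ~/.vault_pass

The vault password file path here is my assumption; --ask-vault-pass works too if you are running it by hand.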

In the playbook output we can see the basic flow of events described in our numbered list. The output ends with the recap, or scorecard, which tallies and categorizes the result of each task, grouped by the host the tasks were logically bound to.

What's Next?

The successful playbook execution produces a functional VE running on a hypervisor. Computers do not get to be lazy, so put it to work! Go build something awesome! You might use some of the code we've written to help you along that path. You can find it on GitHub.

There will eventually be a set of VMware-specific tools, similar to the f5-cloud-libs.

Published Feb 15, 2018
Version 1.0
