# How to deploy an F5XC SMSv2 site with the help of automation
To deploy an F5XC Customer Edge (CE) in SMSv2 mode with the help of automation, it is necessary to follow the three main steps below:

1. Verify the prerequisites at the technical architecture level for the environment in which the CE will be deployed (public cloud or datacenter/private cloud)
2. Create the necessary objects at the F5XC platform level
3. Deploy the CE instance in the target environment

We will provide more details for each step, as well as the simplest Terraform skeleton code to deploy an F5XC CE in the main cloud environments (AWS, GCP and Azure).

## Step 1: verification of architecture prerequisites

To be deployed, a CE must have an interface (which is and will always be its default interface) that has Internet access. This access is necessary to perform the installation steps and to provide the "control plane" part to the CE. This interface is referred to as the "Site Local Outside" (SLO) interface. The Internet access can be provided in several ways.

"Cloud provider" type site:
- Public IP address directly on the interface
- Private IP address on the interface and use of a NAT Gateway as default route
- Private IP address on the interface and use of a security appliance (firewall type, for example) as default route
- Private IP address on the interface and use of an explicit proxy

Datacenter or "private cloud" type site:
- Private IP address on the interface and use of a security appliance (firewall type, for example) or router as default route
- Private IP address on the interface and use of an explicit proxy
- Public IP address on the interface and "direct" routing to the Internet

It is highly recommended (not to say required) to add at least a second interface during the first site deployment, because on some infrastructures (GCP, for example) it is not possible to add network interfaces after the creation of the VM. Even on platforms where adding a network interface is possible, a reboot of the F5XC CE is needed.
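As a quick pre-flight aid, the interface that currently holds the default route on a Linux host is the natural SLO candidate. The following is a minimal sketch (not part of any F5 tooling; the function name is illustrative) that parses the default route:

```shell
# Minimal pre-flight sketch (not F5 tooling): print the interface that
# holds the default route, i.e. the candidate SLO interface.
# Reads the output of `ip route show default` from stdin.
slo_candidate() {
  awk '/^default/ { for (i = 1; i < NF; i++) if ($i == "dev") { print $(i+1); exit } }'
}

# Example usage:
#   ip route show default | slo_candidate
```

On a host with a working default route, the printed interface is the one that must have the outbound access described above.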
An F5XC SMSv2 CE can have up to eight interfaces overall. Additional interfaces are (most of the time) used as "Site Local Inside" (SLI) interfaces or "Segment" interfaces (that specific part will be covered in another article).

Basic CE flow matrix:

| Interface | Direction and protocols | Use case / purpose |
| --- | --- | --- |
| SLO | Egress – TCP 53 (DNS), TCP 443 (HTTPS), UDP 53 (DNS), UDP 123 (NTP), UDP 4500 (IPSEC) | Registration, software download and upgrade, VPN tunnels towards F5XC infrastructure for control plane |
| SLO | Ingress – None | RE / CE use case, CE to CE use case by using F5 ADN |
| SLO | Ingress – UDP 4500 | Site Mesh Group for direct CE to CE secure connectivity over the SLO interface (no usage of F5 ADN) |
| SLO | Ingress – TCP 80 (HTTP), TCP 443 (HTTPS) | HTTP/HTTPS load balancer on the CE for WAAP use cases |
| SLI | Egress – Depends on the use case / application; if the security constraints permit it, no restriction | |
| SLI | Ingress – Depends on the use case / application; if the security constraints permit it, no restriction | |

For advanced details regarding the IPs and domains used for registration / software upgrade and for tunnel establishment towards the F5XC infrastructure, please refer to: https://docs.cloud.f5.com/docs-v2/platform/reference/network-cloud-ref#new-secure-mesh-v2-sites

## Step 2: creation of necessary objects at the F5XC platform level

This step will be performed by the Terraform script by:
- Creating an SMSv2 token
- Creating an F5XC site of SMSv2 type

### API certificate and Terraform variables

First, it is necessary to create an API certificate. Please follow the instructions in our official documentation here: https://docs.cloud.f5.com/docs-v2/administration/how-tos/user-mgmt/Credentials#generate-api-certificate-for-my-credentials or here: https://docs.cloud.f5.com/docs-v2/administration/how-tos/user-mgmt/Credentials#generate-api-certificate-for-service-credentials, depending on the type of API certificate you want to create and use (user credential or service credential).
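Before wiring the certificate into Terraform, it can be worth verifying that the P12 file actually opens with the password you plan to export. This is an illustrative helper, not part of the F5 tooling; the function name is an assumption:

```shell
# Illustrative helper (not part of F5 tooling): verify that a P12
# bundle opens with a given password before Terraform ever uses it.
check_p12() {
  # $1 = path to the .p12 file, $2 = password
  if openssl pkcs12 -in "$1" -passin "pass:$2" -nokeys -out /dev/null 2>/dev/null; then
    echo valid
  else
    echo invalid
  fi
}

# Example usage:
#   check_p12 /path/to/api-creds.p12 "$VES_P12_PASSWORD"
```

A wrong password here is much faster to diagnose than a failed Terraform apply.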
In the Terraform variables, these are the ones you need to modify. The "location of the api key" should be the full path where your API P12 file is stored:

```hcl
variable "f5xc_api_p12_file" {
  type        = string
  description = "F5XC tenant api key"
  default     = "<location of the api key>"
}
```

If your F5XC console URL is https://mycompany.console.ves.volterra.io, then the value for f5xc_api_url will be https://mycompany.console.ves.volterra.io/api:

```hcl
variable "f5xc_api_url" {
  type    = string
  default = "https://<tenant name>.console.ves.volterra.io/api"
}
```

When using Terraform, you will also need to export the P12 certificate password as an environment variable:

```shell
export VES_P12_PASSWORD=<password of P12 cert>
```

### Creation of the SMSv2 token

This is achieved with the following Terraform code and the "type = 1" parameter:

```hcl
#
# F5XC objects creation
#
resource "volterra_token" "smsv2-token" {
  depends_on = [volterra_securemesh_site_v2.site]
  name       = "${var.f5xc-ce-site-name}-token"
  namespace  = "system"
  type       = 1
  site_name  = volterra_securemesh_site_v2.site.name
}
```

### Creation of the F5XC SMSv2 site

This is achieved with the following Terraform code (example for GCP). This is where you need to configure all the options you want to be applied at site creation.
```hcl
resource "volterra_securemesh_site_v2" "site" {
  name      = format("%s-%s", var.f5xc-ce-site-name, random_id.suffix.hex)
  namespace = "system"

  block_all_services      = false
  logs_streaming_disabled = true
  enable_ha               = false

  labels = {
    "ves.io/provider" = "ves-io-GCP"
  }

  re_select {
    geo_proximity = true
  }

  gcp {
    not_managed {}
  }
}
```

For instance, if you want to use a corporate proxy and have the CE tunnels pass through the proxy, the following should be added:

```hcl
  custom_proxy {
    enable_re_tunnel = true
    proxy_ip_address = "10.154.32.254"
    proxy_port       = 8080
  }
```

And if you want to force CE-to-RE connectivity over SSL, the following should be added:

```hcl
  tunnel_type = "SITE_TO_SITE_TUNNEL_SSL"
```

## Step 3: creation of the CE instance in the target environment

This step will be performed by the Terraform script by:
- Generating a cloud-init file
- Creating the F5XC site instance in the environment, based on the marketplace images or the available F5XC images

How to list the available F5XC images in Azure:

```shell
az vm image list --all --publisher f5-networks --offer f5xc_customer_edge --sku f5xccebyol --output table | sort -k4 -V
```

Then check in the output for the one with the highest version:

```
Architecture  Offer               Publisher    Sku           Urn                                                    Version
------------  ------------------  -----------  ------------  -----------------------------------------------------  ---------
x64           f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:9.2025.17    9.2025.17
x64           f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.40.1    2024.40.1
x64           f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.40.2    2024.40.2
x64           f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.44.1    2024.44.1
x64           f5xc_customer_edge  f5-networks  f5xccebyol_2  f5-networks:f5xc_customer_edge:f5xccebyol_2:2024.44.2  2024.44.2
```

We are going to re-use some of these parameters in the Terraform script, to instruct the Terraform code which image it should use.
```hcl
  source_image_reference {
    publisher = "f5-networks"
    offer     = "f5xc_customer_edge"
    sku       = "f5xccebyol"
    version   = "9.2025.17"
  }
```

Also, for Azure, you need to accept the legal terms of the F5XC CE image. This needs to be performed only once, by running the following commands.

Select the Azure subscription in which you are planning to deploy the F5XC CE:

```shell
az account set -s <subscription-id>
```

Accept the terms and conditions for the F5XC CE for this subscription:

```shell
az vm image terms accept --publisher f5-networks --offer f5xc_customer_edge --plan f5xccebyol
```

How to list the available F5XC images in GCP:

```shell
gcloud compute images list --project=f5-7626-networks-public --filter="name~'f5xc-ce'" --sort-by=~creationTimestamp --format="table(name,creationTimestamp)"
```

Then check in the output for the one with the highest version:

```
NAME                        CREATION_TIMESTAMP
f5xc-ce-crt-20250701-0123   2025-07-09T02:15:08.352-07:00
f5xc-cecrt-20250701-0099-9  2025-07-02T01:32:40.154-07:00
f5xc-ce-202505151709081     2025-06-25T22:31:23.295-07:00
```

How to list the available F5XC images in AWS:

```shell
aws ec2 describe-images \
  --region eu-west-3 \
  --filters "Name=name,Values=*f5xc-ce*" \
  --query "reverse(sort_by(Images, &CreationDate))[*].{ImageId:ImageId,Name:Name,CreationDate:CreationDate}" \
  --output table
```

Then check in the output for the AMI with the latest creation date.

Also, for AWS, you need to accept the legal terms of the F5XC CE image. This needs to be performed only once: go to this page in your AWS Console, then select "View purchase options" and then select "Subscribe".

## Putting everything together

### Global overview

We are going to use Azure as the target environment to deploy the F5XC CE. The CE will be deployed with two NICs, with the SLO in a public subnet and a public IP attached to that NIC. We assume that all the prerequisites from step 1 are met.
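The listings above return several image versions. When the versions share a numbering scheme, picking the newest one can be scripted with a version sort; the sketch below is illustrative, and note that mixed schemes (such as 9.2025.17 next to 2024.44.2) do not compare meaningfully this way, so a manual check remains necessary in that case:

```shell
# Illustrative sketch: given image versions one per line on stdin,
# print the highest one. Only meaningful when all versions share a
# common numbering scheme.
latest_version() {
  sort -V | tail -n 1
}

# Example usage:
#   az vm image list ... --query "[].version" -o tsv | latest_version
```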
The Terraform skeleton for Azure is available here: https://github.com/veysph/Prod-TF/

It's not intended to be perfect, just an example of the minimum basic things needed to deploy an F5XC SMSv2 CE with automation. Changes and enhancements based on the different needs you might have are more than welcome. It's really intended to be flexible and not too strict.

Structure of the Terraform directory:
- provider.tf contains everything related to the needed providers
- variables.tf contains all the variables used in the Terraform files
- f5xc_sites.tf contains everything related to the F5XC objects creation
- main.tf contains everything to start the F5XC CE in the target environment

### Deployment

Make all the relevant changes in variables.tf. Don't forget to export your P12 password as an environment variable (see Step 2, API certificate and Terraform variables)! Then run:

```shell
terraform init
terraform plan
terraform apply
```

Should everything be correct at each step, you should get a CE object in the F5XC console, under Multi-Cloud Network Connect --> Manage --> Site Management --> Secure Mesh Sites v2.

# Streamlining Dev Workflows: A Lightweight Self-Service Solution for Bypassing Bot Defense Safely
Automate the update of an F5 Distributed Cloud IP prefix set that's already wired to a service policy with the "Skip Bot Defense" option set. An approved developer hits a simple, secret endpoint; the system detects their current public IP and updates the designated IP set with a `/32`. Bot Defense is skipped for that IP on dev/test traffic immediately. No tickets. No console spelunking. No risky, long-lived exemptions.

At a glance:
- Self-service: Developers add their _current_ IP in seconds.
- Tight scope: Changes apply only to the dev/test services attached to that policy.

# VIPTest: Rapid Application Testing for F5 Environments
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.

# How to deploy an F5XC SMSv2 for KVM with the help of automation
## Typical KVM architecture for an F5XC CE

The purpose of this article isn't to build a complete KVM environment, but some basic concepts should be explained.

To be deployed, a CE must have an interface (which is and will always be its default interface) that has Internet access. This access is necessary to perform the installation steps and to provide the "control plane" part to the CE. This interface is referred to as the "Site Local Outside" (SLO) interface.

It is highly recommended (not to say required) to add at least a second interface during the first site deployment, because on some infrastructures (GCP, for example) it is not possible to add network interfaces after the creation of the VM. Even on platforms where adding a network interface is possible, a reboot of the F5XC CE is needed.

An F5XC SMSv2 CE can have up to eight interfaces overall. Additional interfaces are (most of the time) used as "Site Local Inside" (SLI) interfaces or "Segment" interfaces (that specific part will be covered in another article).

To match the requirements, one typical KVM deployment is described below. It's a way of doing things, but it's not the only way. Most likely, the architecture will be composed of:
- a Linux host
- user-space software to run KVM: qemu, libvirt, virt-manager
- Linux network bridge interfaces, with KVM networks mapped to those interfaces
- CE interfaces attached to the KVM networks

This is what the diagram below is picturing.

## KVM storage and networking

We will use both components in the Terraform automation.

### KVM storage

It is necessary to define a storage pool for the F5XC CE virtual machine. If you already have a storage pool, you can use it.
Otherwise, here is how to create one:

```shell
sudo virsh pool-define-as --name f5xc-vmdisks --type dir --target /f5xc-vmdisks
```

Create the target directory (/f5xc-vmdisks):

```shell
sudo virsh pool-build f5xc-vmdisks
```

Start the storage pool:

```shell
sudo virsh pool-start f5xc-vmdisks
```

Ensure the storage pool starts automatically on system boot:

```shell
sudo virsh pool-autostart f5xc-vmdisks
```

### KVM networking

This assumes you already have bridge interfaces configured on the Linux host, but no KVM networking yet.

#### KVM SLO networking

Create an XML file (kvm-net-ext.xml) with the following:

```xml
<network>
  <name>kvm-net-ext</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
```

Then run:

```shell
virsh net-define kvm-net-ext.xml
virsh net-start kvm-net-ext
virsh net-autostart kvm-net-ext
```

#### KVM SLI networking

Create an XML file (kvm-net-int.xml) with the following:

```xml
<network>
  <name>kvm-net-int</name>
  <forward mode="bridge"/>
  <bridge name="br1"/>
</network>
```

Then run:

```shell
virsh net-define kvm-net-int.xml
virsh net-start kvm-net-int
virsh net-autostart kvm-net-int
```

## Terraform

### Getting the F5XC CE QCOW2 image

Please use this link to retrieve the QCOW2 image and store it on your KVM Linux host.

### Terraform variables

#### vCPU and memory

Please refer to this page for the CPU and memory requirements.
Then adjust:

```hcl
variable "f5xc-ce-memory" {
  description = "Memory allocated to KVM CE"
  default     = "32000"
}

variable "f5xc-ce-vcpu" {
  description = "Number of vCPUs allocated to KVM CE"
  default     = "8"
}
```

#### Networking

Based on your KVM networking setup, please adjust:

```hcl
variable "f5xc-ce-network-slo" {
  description = "KVM Networking for SLO interface"
  default     = "kvm-net-ext"
}

variable "f5xc-ce-network-sli" {
  description = "KVM Networking for SLI interface"
  default     = "kvm-net-int"
}
```

#### Storage

Based on your KVM storage pool, please adjust:

```hcl
variable "f5xc-ce-storage-pool" {
  description = "KVM CE storage pool name"
  default     = "f5xc-vmdisks"
}
```

#### F5XC CE image location

```hcl
variable "f5xc-ce-qcow2" {
  description = "KVM CE QCOW2 image source"
  default     = "<path to the F5XC CE QCOW2 image>"
}
```

### Cloud-init modification

It's possible to configure a static IP and gateway for the SLO interface. This is done in the cloud-init part of the Terraform code, by specifying slo_ip and slo_gateway as below:

```hcl
data "cloudinit_config" "config" {
  gzip          = false
  base64_encode = false

  part {
    content_type = "text/cloud-config"
    content = yamlencode({
      write_files = [
        {
          path        = "/etc/vpm/user_data"
          permissions = "0644"
          owner       = "root"
          content     = <<-EOT
            token: ${replace(volterra_token.smsv2-token.id, "id=", "")}
            slo_ip: 10.154.1.100/24
            slo_gateway: 10.154.1.254
          EOT
        }
      ]
    })
  }
}
```

If you don't need a static IP, please comment out or remove those two lines.

### Sample Terraform code

It is available here.

### Deployment

Make all the necessary changes in the Terraform variables and in the cloud-init. Then run:

```shell
terraform init
terraform plan
terraform apply
```

Should everything be correct at each step, you should get a CE object in the F5XC console, under Multi-Cloud Network Connect --> Manage --> Site Management --> Secure Mesh Sites v2.

# Automating Certificate Management on F5 BIG-IP
## The Certificate Lifespan Revolution

Welcome to part one of our two-part series on certificate automation for the BIG-IP platform.

Certificate lifecycle management is undergoing a seismic shift that will fundamentally change how organizations handle SSL/TLS certificates. The era of comfortable 12-13 month certificate lifespans is rapidly ending. Major Certificate Authorities and browser vendors are pushing aggressively toward 90-day or even 47-day certificates, and some are proposing even shorter durations.

This transformation represents more than a simple policy change. It's a complete paradigm shift that renders traditional certificate management approaches obsolete. The familiar rhythm of annual renewals, managed through spreadsheets and calendar reminders, becomes not just inefficient but operationally impossible when certificates expire every three months or 47 days.

## The Challenge: Why Manual Processes Are Doomed

Organizations worldwide are recognizing the writing on the wall. Large PKI environments that once managed hundreds or thousands of certificates annually now face the prospect of managing them on a quarterly basis. The mathematical reality is stark: cutting certificate lifespans by 75% translates to a fourfold increase in the frequency of renewals. At F5, we hear you, and we understand the anxiety this creates. We hear from customers daily who are grappling with this operational challenge.

## The F5 Solution: ACME Protocol Implementation

While third-party vendors like Venafi and DigiCert offer comprehensive certificate automation platforms, this guide focuses on F5 solutions that leverage your existing BIG-IP infrastructure. Our approach is based on the ACME (Automatic Certificate Management Environment) protocol. This is the same technology that powers Let's Encrypt and other modern certificate authorities.
The solution uses a specialized ACME implementation called "dehydrated," adapted for BIG-IP by F5's own Kevin Stewart, with Python code inspired by Jason Rahm. You can read Kevin's DevCentral article here.

## Getting Started: Prerequisites and Setup

Before diving into the implementation, ensure you have:
- An active BIG-IP with the LTM module and basic networking configured
- A registered domain with DNS management access for A record modifications (if setting up a new domain for testing purposes)
- Internet connectivity for downloading the ACME script and communicating with certificate authorities

This solution is straightforward to configure and deploy. Follow along below.

### Step 1: Download and Explore

Connect to your BIG-IP shell via SSH and download the script:

```shell
curl -s https://raw.githubusercontent.com/f5devcentral/kojot-acme/main/install.sh | bash
```

Navigate to the installation directory and explore the config file. This is where you can customize the key size, contact information, OCSP stapling, and more.

```shell
cd /shared/acme
```

The script adds a few new shiny toys to your BIG-IP. You will find two new data groups and a new iRule. The data group dg_acme_config houses the subject and CA of your certificate; we will need to modify this data group before we run the script (more on this in a moment). The dg_acme_challenge data group holds the challenge tokens and is ephemeral, cleaning up when the process completes.

The iRule handles the HTTP-01 challenge. You need this iRule applied to the port 80 VIP when you run the script. At a high level, the iRule intercepts the challenge request from the ACME server and responds with an appropriate challenge response. In scenarios without a proxy, such as the BIG-IP, the web server handles this. Alternatively, you can use a DNS challenge by adding a TXT record at your registrar. The HTTP method is much simpler, so that's what we will use here.

### Step 2: Configure Your Virtual IP

Create an HTTP virtual IP (VIP) to handle the ACME challenges.
Production environments usually need both HTTP (port 80) and HTTPS (port 443) VIPs; our demonstration uses a simple HTTP-only setup.

Critical requirement: the ACME iRule must be applied to your port 80 VIP to enable HTTP-01 challenge processing.

### Step 3: Configure the Data Group - dg_acme_config

We need to add our domain and CA information at a minimum. Add your domain to the string field and your CA information to the value field. See the documentation for more examples.

### Step 4: Configure DNS

If you already have a domain configured pointing to your HTTP VIP, then skip this step. However, if you are setting up a new domain, please make sure that DNS is pointing to your VIP IP address or A record. For this demo, I purchased a low-cost domain and configured the A record to point to my AWS Elastic IP, which is attached to my private IP in my VPC.

### Step 5: Run the Script

OK: you have DNS configured, the VIP configured, and the data group modified with your domain and CA information. You are ready to roll. Go to the directory and run the script:

```shell
cd /shared/acme
./f5acmehandler.sh -verbose
```

The verbose flag provides real-time feedback on the ACME communication process, invaluable for troubleshooting and understanding the workflow.

### Success?

If you are successful, you should see output similar to the screenshot below. If not, the details of the script workflow should clue you in to the problem. The GitHub repo also offers some troubleshooting advice. Also, let's see if we got a brand new shiny certificate. Yep.

### What's Next?

While this created a certificate and key, it didn't place them into an SSL profile. This is possible by modifying the config file. Scheduling checks could also be on your list; Kevin's article highlights this and provides an example. The following will run the script every Monday at 4 am:

```shell
./f5acmehandler.sh --schedule "00 04 * * 1"
```

There are other advanced capabilities that I did not configure or explain in this article.
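Alongside a scheduled renewal run, a quick expiry check on the issued certificate can confirm the automation is actually keeping it fresh. This is an illustrative sketch, not part of the kojot-acme tooling, and it assumes GNU date (as found on Linux hosts; BSD date uses different flags):

```shell
# Illustrative check (assumes GNU date, not part of kojot-acme):
# print the number of whole days until a PEM certificate expires.
days_left() {
  end=$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2)
  echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

# Example usage:
#   days_left /path/to/mysite.crt
```

A value that never drops much below the CA's renewal window is a good sign the schedule is doing its job.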
I encourage you to visit the DevCentral repo for more information. A key consideration for Federal agencies is the ability to support air-gapped or closed networks; however, the requirements and compliance hurdles will need to be examined further. In the following article, we will cover the Certificate Order capability built into the BIG-IP. Until next time.

# Mastering Imperative and Declarative Automation with F5 BIG-IP – AppWorld 2025
At AppWorld 2025, Anton Varkalevich and I will be hosting a hands-on lab, "Mastering Imperative and Declarative Automation with F5 BIG-IP."

I'm Matt Mabis, a Senior Solutions Architect in the Business Development/Technical Alliances department at F5. I specialize in solution integrations with our partnerships with Red Hat (Ansible and OpenShift), VMware, and Omnissa Horizon. Anton Varkalevich is a Solutions Engineer III for Financial Services at F5, specializing in ensuring financial customers have the right solutions to meet their unique needs. Automation plays a critical role in financial institutions, enabling them to serve their customers quickly and effectively.

This lab is the result of years of experience working with customers to streamline application deployments on F5 BIG-IP. We'll use Ansible Automation Platform to demonstrate both imperative and declarative automation approaches. Participants will first deploy common use cases, such as HTTPS and SSL-delivered applications, using imperative automation with individual Ansible modules. Then, we'll achieve the same outcomes using declarative automation, offering a side-by-side comparison to help attendees choose the best approach for their needs.

By the end of this lab, you'll have a solid understanding of both automation styles and how to apply them effectively in your environment. Join us at AppWorld 2025; we look forward to sharing our knowledge with you!

# Installing and Locking a Specific Version of F5 NGINX Plus
A guide to installing and locking a specific version of NGINX Plus to ensure stability, meet internal policies, and prepare for controlled upgrades.

## Introduction

The most common way to install F5 NGINX Plus is by using the package manager tool native to your Linux host (e.g., yum, apt-get, etc.). By default, the package manager installs the latest available version of NGINX Plus. However, there may be scenarios where you need to install an earlier version. To help you modify your automation scripts, we've provided example commands for selecting a specific version.

## Common Scenarios for Installing an Earlier Version of NGINX Plus

- Your internal policy requires sticking to internally tested versions before deploying the latest release.
- You prefer to maintain consistency by using the same version across your entire fleet for simplicity.
- You'd like to verify and meet additional requirements introduced in a newer release (e.g., NGINX Plus Release 33) before upgrading.

## Commands for Installing and Holding a Specific Version of NGINX Plus

Use the following commands based on your Linux distribution to install and lock a prior version of NGINX Plus.

Ubuntu 20.04, 22.04, 24.04 LTS:

```shell
sudo apt-get update
sudo apt-get install -y nginx-plus=<VERSION>
sudo apt-mark hold nginx-plus
```

Debian 11, 12:

```shell
sudo apt-get update
sudo apt-get install -y nginx-plus=<VERSION>
sudo apt-mark hold nginx-plus
```

AlmaLinux 8, 9 / Rocky Linux 8, 9 / Oracle Linux 8.1+, 9 / RHEL 8.1+, 9:

```shell
sudo yum install -y nginx-plus-<VERSION>
sudo yum versionlock nginx-plus
```

Amazon Linux 2 LTS, 2023:

```shell
sudo yum install -y nginx-plus-<VERSION>
sudo yum versionlock nginx-plus
```

SUSE Linux Enterprise Server 12, 15 SP5+:

```shell
sudo zypper install nginx-plus=<VERSION>
sudo zypper addlock nginx-plus
```

Alpine Linux 3.17, 3.18, 3.19, 3.20:

```shell
apk add nginx-plus=<VERSION>
echo "nginx-plus hold" | sudo tee -a /etc/apk/world
```

FreeBSD 13, 14:

```shell
pkg install nginx-plus-<VERSION>
pkg lock nginx-plus
```

## Notes

- Replace <VERSION> with the desired version (e.g., 32-2*).
- After installation, verify the installed version with the command: nginx -v.
- Holding or locking the package ensures it won't be inadvertently upgraded during routine updates.

## Conclusion

Installing and locking a specific version of NGINX Plus ensures stability, compliance with internal policies, and proper validation of new features before deployment. By following the provided commands tailored to your Linux distribution, you can confidently maintain control over your infrastructure while minimizing the risk of unintended upgrades. Regularly verifying the installed version and holding updates will help ensure consistency and reliability across your environments.

# What is an iApp?
iApp is a seriously cool, game-changing technology that was released in F5's v11. There are so many benefits to our customers with this tool that I am going to break it down over a series of posts. Today we will focus on what it is.

Hopefully you are already familiar with the power of F5's iRules technology. If not, here is a quick background. F5 products support a scripting language based on TCL. This language allows an administrator to tell their BIG-IP to intercept, inspect, transform, direct, and track inbound or outbound application traffic. An iRule is the bit of code that contains the set of instructions the system uses to process data flowing through it, either in the header or payload of a packet. This technology allows our customers to solve real-time application issues, security vulnerabilities, and the like, that are unique to their environment or are time sensitive.

An iApp is like iRules, but for the management plane. Again, there is a scripting language with which administrators can build instructions the system will use. But instead of describing how to process traffic, in the case of an iApp it is used to describe the user interface and how the system will act on information gathered from the user. The bit of code that contains these instructions is referred to as an iApp or iApp template.

A system administrator can use the F5-provided iApp templates installed on their BIG-IP to configure a service for a new application. They will be presented with the text and input fields defined by the iApp author. Once complete, their answers are submitted, and the template implements the configuration. First, an application service object (ASO) is created that ties together all the configuration objects which are created, like virtual servers and profiles. Each object created by the iApp is then marked with the ASO to identify its membership in the application for future management and reporting.
That about does it for what an iApp is. Next up: how they can work for you.

# NGINX Virtual Machine Building with cloud-init
Traditionally, building new servers was a manual process. A system administrator had a run book with all the steps required and would perform each task one by one. If the admin had multiple servers to build, the same steps were repeated over and over. All public cloud compute platforms provide an automation tool called cloud-init that makes it easy to automate configuration tasks while a new VM instance is being launched. In this article, you will learn how to automate the process of building out a new NGINX Plus server using cloud-init.
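The pattern described above can be sketched as a minimal cloud-init user-data file. In this sketch, the open-source nginx package stands in for NGINX Plus, since a real NGINX Plus install additionally requires your repository certificate, key, and repo definition; the file name and contents are illustrative:

```shell
# Sketch of a minimal cloud-init user-data file written from shell.
# The open-source nginx package stands in for NGINX Plus here; an
# NGINX Plus install also needs your repo certificate, key, and repo
# file delivered (for example via write_files) before package install.
cat > user-data <<'EOF'
#cloud-config
package_update: true
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
EOF
```

Passed as user data at instance launch, cloud-init runs these directives once on first boot, replacing the manual run-book steps.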