Onboarding F5 in Cloud Part 1 - Startup Scripts
In a previous article, we discussed how we leveraged cloud-init to pass startup scripts to BIG-IP Virtual Editions (VE) in AWS. Now that v13 is here, let's take this multi-cloud!
As you can guess, you can pass startup scripts in other environments as well. The input names are a *little* different, because, of course, everyone has to be different, but no worries, the effect is still the same. To see what this looks like, let's take a look at it through the lens of some popular orchestration tools.
Let's start with Terraform.
Terraform is a popular multi-cloud tool. Much like CloudFormation, it aims to provide a declarative approach to provisioning your entire stack, everything from network to compute to application services. It does this by having its own state engine and providing a simple yet elegant DSL known as HashiCorp Configuration Language (HCL).
OpenStack:
In OpenStack, it's easy. Starting with BIG-IP v13, our images have cloud-init installed as well, and you use the exact same "user_data" field to pass that startup script.
The only difference is that you need to enable something called "config_drive," because OpenStack places the data on a drive mounted to the instance, and cloud-init looks for the user_data there instead of grabbing it over the network.
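To make that concrete, here's a minimal sketch of what that might look like in Terraform (the image, flavor, and network names below are illustrative placeholders, not values from our templates):

resource "openstack_compute_instance_v2" "bigip" {
  name         = "bigip-ve"
  image_name   = "BIGIP-13.x"            # hypothetical image name
  flavor_name  = "m1.xlarge"             # hypothetical flavor
  key_pair     = "${var.ssh_key_name}"
  config_drive = true                    # required so cloud-init finds user_data on the config drive
  user_data    = "${data.template_file.user_data.rendered}"

  network {
    name = "mgmt"                        # hypothetical management network
  }
}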
Google Compute Engine (GCE):
In GCE, we've installed a bit of custom code on the BIG-IPs to leverage the obscurely named "startup_script" parameter 🙂 Kudos to GCE for keeping things nice and simple here!
https://www.terraform.io/docs/providers/google/r/compute_instance.html#metadata_startup_script
However, they did add a tiny twist. You can also pass your startup script by placing it in a metadata key named "startup-script," so depending on how you pass it, you get different behavior: if you pass it via a metadata key, any changes cause the script to be rerun; if you pass it via the instance parameter, any changes cause the instance to be recreated.
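For instance, a minimal sketch using the instance parameter might look like this (the machine type, image, and network below are illustrative):

resource "google_compute_instance" "bigip" {
  name         = "bigip-ve"
  machine_type = "n1-standard-4"          # hypothetical machine type
  zone         = "${var.zone}"

  boot_disk {
    initialize_params {
      image = "${var.image}"              # hypothetical BIG-IP image reference
    }
  }

  network_interface {
    network = "default"                   # hypothetical network
  }

  # Passed as the instance parameter: changes recreate the instance.
  # Put it under the "startup-script" metadata key instead if you only
  # want changes to rerun the script.
  metadata_startup_script = "${data.template_file.user_data.rendered}"
}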
Microsoft Azure:
Not necessarily new in v13, but in Azure, we've installed their version of cloud-init, called waagent, and mostly leverage something called the "custom script" virtual_machine_extension.
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_extension.html
Under the hood, waagent parses this input a little differently, through a "commandToExecute" field, but the effect is similar. They also have another parameter, called custom_data, that lets you pass a single file like in the clouds above, which we will leverage for consistency.
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine.html#custom_data
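Putting those together, a sketch of the extension resource might look something like this, following the older provider syntax in the docs linked above (the names and the exact command are illustrative; waagent typically writes the decoded custom_data to /var/lib/waagent/CustomData):

resource "azurerm_virtual_machine_extension" "run_startup_script" {
  name                 = "bigip-startup"
  location             = "${var.region}"
  resource_group_name  = "${var.resource_group}"
  virtual_machine_name = "${azurerm_virtual_machine.bigip.name}"
  publisher            = "Microsoft.Azure.Extensions"    # Linux Custom Script extension
  type                 = "CustomScript"
  type_handler_version = "2.0"

  # waagent executes whatever command is placed in "commandToExecute";
  # here we run the file delivered via the VM's custom_data parameter.
  settings = <<SETTINGS
    {
      "commandToExecute": "bash /var/lib/waagent/CustomData"
    }
SETTINGS
}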
For comparison, let's take a look at Ansible.
In this article, we discussed leveraging Ansible to deploy and configure BIG-IPs in AWS. There, however, we onboarded and configured the BIG-IPs externally using REST API calls. If we wanted to use startup scripts instead, for instance when autoscaling, let's see if we can find the same parameters in Ansible's modules/definitions:
- user_data:
  - EC2 instance: https://docs.ansible.com/ansible/latest/ec2_module.html
  - Auto Scale Launch Config: http://docs.ansible.com/ansible/latest/ec2_lc_module.html
- userdata (OpenStack): https://docs.ansible.com/ansible/os_server_module.html
- startup_script (GCE): https://docs.ansible.com/ansible/gce_module.html
Now, let's take a look at an actual BIG-IP startup script example.
In this Terraform template,
https://github.com/f5devcentral/f5-terraform/blob/master/modules/providers/aws/infrastructure/proxy/standalone/1nic/byol/bigip.tf
you can see the instance definition:
resource "aws_instance" "bigip" {
ami = "${lookup(var.amis, var.region)}"
instance_type = "${var.instance_type}"
associate_public_ip_address = "${var.create_management_public_ip}"
availability_zone = "${var.availability_zone}"
subnet_id = "${var.subnet_id}"
vpc_security_group_ids = ["${aws_security_group.sg.id}"]
iam_instance_profile = "${aws_iam_instance_profile.proxy_service_discovery_profile.name}"
key_name = "${var.ssh_key_name}"
root_block_device { delete_on_termination = true }
tags {
Name = "${var.environment}-proxy"
environment = "${var.environment}"
owner = "${var.owner}"
group = "${var.group}"
costcenter = "${var.costcenter}"
application = "${var.application}"
}
user_data = "${data.template_file.user_data.rendered}"
}
The user_data field points to the "rendered" output of a template file (user_data.tpl).
NOTE: The user_data can be a static string or file, but these orchestration tools provide a handy templating mechanism so we can make this input dynamic, based on the inputs or outputs of the objects created. In Terraform's case, it replaces each "${variable}" with whatever value you pass via the "vars" parameter.
Here is the "template_file" object we pass the variables into:
data "template_file" "user_data" {
template = "${file("${path.module}/user_data.tpl")}"
vars {
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
management_gui_port = "${var.management_gui_port}"
dns_server = "${var.dns_server}"
ntp_server = "${var.ntp_server}"
timezone = "${var.timezone}"
region = "${var.region}"
application = "${var.application}"
vs_dns_name = "${var.vs_dns_name}"
vs_address = "${var.vs_address}"
vs_mask = "${var.vs_mask}"
vs_port = "${var.vs_port}"
pool_member_port = "${var.pool_member_port}"
pool_name = "${var.pool_name}"
pool_tag_key = "${var.pool_tag_key}"
pool_tag_value = "${var.pool_tag_value}"
site_ssl_cert = "${var.site_ssl_cert}"
site_ssl_key = "${var.site_ssl_key}"
license_key = "${var.license_key}"
}
}
NOTE: The ${variable} syntax also happens to be bash variable syntax, so if you're generating a bash script like we are and want the end result to remain a bash variable, you need to escape it with another "$", e.g. $${my_variable} 🙂 Ansible leverages common Jinja templating, so your user_data template would have {{ my_variable }} instead.
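For example, a hypothetical fragment of a user_data.tpl mixing both forms (the variable names are purely illustrative) might look like this:

#!/bin/bash
# ${admin_username} is filled in by Terraform at render time from the "vars" map
echo "configuring admin user: ${admin_username}"

# $${DEVICE_NAME} is escaped, so the rendered script keeps a plain bash
# variable that is evaluated at boot time instead of at render time
DEVICE_NAME=$(hostname)
echo "device name: $${DEVICE_NAME}"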
Now, let's look at the startup scripts in the other environments:
- Openstack: https://github.com/f5devcentral/f5-terraform/blob/master/modules/providers/openstack/infrastructure/proxy/standalone/1nic/byol/user_data.tpl
- GCE: https://github.com/f5devcentral/f5-terraform/blob/master/modules/providers/gce/infrastructure/proxy/standalone/1nic/byol/user_data.tpl
- Azure: https://github.com/f5devcentral/f5-terraform/blob/master/modules/providers/azure/infrastructure/proxy/standalone/1nic/byol/user_data.tpl
Notice the pattern here. We can reuse almost the same startup script save for a few small variations.
We leverage these same instance customization parameters in the various platforms' native orchestration tools, like AWS CloudFormation, Azure ARM templates, Google Deployment Manager templates, and OpenStack Heat templates. For more examples of leveraging these inputs, see our official templates, which incorporate more best practices, integrations, etc.
- https://github.com/F5Networks/f5-aws-cloudformation
- https://github.com/F5Networks/f5-azure-arm-templates
- https://github.com/F5Networks/f5-google-gdm-templates
- https://github.com/F5Networks/f5-openstack-hot
Like the Terraform templating mechanism or Ansible's Jinja templating, these native tools are also able to dynamically generate the payloads from input parameters and the outputs of objects created. And, since you may want to customize the deployments some more (e.g., to rename something to match your conventions, or to download or configure some other policy), we try to point you to a safe section to customize. For instance, find the section to modify by searching for "### START".
For example:
### START CUSTOM
add your customizations here.
### END CUSTOM
You can even deploy these templates through other orchestration tools.
- https://plugins.jenkins.io/jenkins-cloudformation-plugin
- http://docs.ansible.com/ansible/latest/cloudformation_module.html
and, amazingly, through other template-based tools.
For example, deploying a CloudFormation template through Terraform:
https://github.com/f5devcentral/f5-terraform/blob/master/modules/providers/aws/infrastructure/proxy/autoscale/1nic-cft/util/bigip.tf
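Here's a minimal sketch of that pattern, assuming you already have a CloudFormation template on disk (the file name and parameter names below are placeholders, not the ones used in the linked module):

resource "aws_cloudformation_stack" "bigip_autoscale" {
  name         = "${var.environment}-bigip-autoscale"
  capabilities = ["CAPABILITY_IAM"]

  # Hand the existing CloudFormation template to Terraform to deploy
  template_body = "${file("${path.module}/autoscale-bigip.template")}"

  parameters = {
    sshKey  = "${var.ssh_key_name}"
    subnets = "${var.subnet_id}"
  }
}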
By leveraging the same tooling used to deploy your apps, you can begin to make deployments both quicker and more repeatable.