# Infrastructure as Code
## Using F5's Terraform modules in an air-gapped environment
### Introduction

IT industry research, such as *Accelerate*, shows that improving a company's ability to deliver software is critical to its overall success. The following key practices and design principles are cornerstones of that improvement:

- Version control of code and configuration
- Automation of deployment
- Automation of testing and test data management
- "Shifting left" on security
- Loosely coupled architectures
- Pro-active notification

F5 has published Terraform modules on GitHub.com to help customers adopt deployment automation practices, focused on streamlining the instantiation of BIG-IPs on AWS, Azure, and Google. Using these modules allows F5 customers to leverage F5's embedded knowledge and expertise.

### But we have limited access to public resources

Not all customer Terraform automation hosts running the CLI or enterprise products are able to access public internet resources like GitHub.com and the Terraform Registry. The following steps describe how to create and maintain a private, air-gapped copy of F5's modules for these secured customer environments.

### Creating your air-gapped copy of the modules you need

This example uses a personal GitHub account as an analog for air-gapped targets, so we can't use the fork feature of GitHub.com to create the copy. For this approach, we're assuming a workstation that has access to both the source repository host and the target repository host; so, not truly fully air-gapped. We'll show a workflow using git bundle in the future (a sketch appears at the end of this section).

**Retrieve the remote URL for one of the modules at F5's devcentral GitHub account**

Export the remote URL for the source repository:

```bash
export MODULEGITHUBURL="git@github.com:f5devcentral/terraform-aws-bigip-module.git"
```

**Create a repository on the target air-gapped host**

Follow the appropriate directions for the air-gapped hosted Git (Bitbucket, GitLab, GitHub Enterprise, etc.), and retrieve the remote URL for this repository. Then export the remote URL for the air-gapped repository.

Note: The air-gapped repository is still empty at this point.
Note: The example uses github.com; your real-world use will be against your internal Git host.

```bash
export MODULEAIRGAPURL="git@github.com:myteamsaccount/localmodulerepo.git"
```

**Clone the module source repository**

This example uses F5's module for AWS:

```bash
git clone $MODULEGITHUBURL
```

**Add the target repository as an additional remote**

Again, we're using F5's AWS module as an example. We're using the remote URL exported as MODULEAIRGAPURL to create the additional Git remote:

```bash
cd terraform-aws-bigip-module
git remote add airgap $MODULEAIRGAPURL
```

**Push the latest to the air-gapped repository**

Note: In the example below we're pushing the main branch. In some older repositories, the primary branch may still be named master.
Note: Pushing the tags into the airgap repository is critical to version management of the modules.

```bash
# get the latest from the origin repository
git fetch origin
# push any changes to the airgap repository
git push airgap main
# push all repository tags to the airgap repository
git push --tags airgap
```

### Using your air-gapped copy of the modules

**Identify the module version to use**

List all of the tags available in the repository:

```bash
git tag
```

e.g.:

```
0.9.2
v0.9
v0.9.1
v0.9.3
v0.9.4
v0.9.5
```

**Review new versions for environment acceptance**

At this point, your organization should perform any acceptance testing of the new tags prior to using them in production environments.
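The article defers a fully air-gapped workflow to a future post, but for readers who can't stage a dual-homed workstation, here is a minimal sketch of a git bundle transfer. This is an illustration only; the internal host name (internal-git.example.com) and transfer path (/media/transfer) are placeholder assumptions, not part of the original workflow.

```bash
# On the internet-connected workstation: mirror the module repository and
# pack its full history, branches, and tags into a single bundle file.
git clone --mirror git@github.com:f5devcentral/terraform-aws-bigip-module.git
cd terraform-aws-bigip-module.git
git bundle create /media/transfer/bigip-module.bundle --all

# Carry the .bundle file across the air gap (removable media, etc.).

# On the air-gapped workstation: verify the bundle, clone from it, and
# push the contents (including tags) into the internal repository.
git bundle verify /media/transfer/bigip-module.bundle
git clone /media/transfer/bigip-module.bundle terraform-aws-bigip-module
cd terraform-aws-bigip-module
git remote add airgap git@internal-git.example.com:myteam/localmodulerepo.git
git push airgap main
git push --tags airgap
```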
### Source reference in a Terraform module using git

Unlike using the Terraform Registry, when using Git as your module source the version reference is included in the source URL. The source reference is the prefix `git::`, followed by the remote URL of the airgap repository, followed by `?ref=`, finally followed by the tag identified in the previous step.

Note: We are referencing the airgap repository, NOT the origin repository.
Note: It is highly recommended to include the version reference in the URL. If the reference is not included in the URL, the latest commit to the default branch will be used at apply time. This makes the results of an apply non-deterministic, causing unexpected results and possibly service disruptions.

```hcl
module "bigip" {
  source = "git::https://github.com/myteamsaccount/localmodulerepo.git?ref=v0.9.3"
  ...
}
```

Check out Terraform for more detailed configuration requirements.

### Source reference in a Terraform module using a private Terraform registry

If you have an instance of Terraform Enterprise, it's possible to connect the private Git repository created above to the [private module registry](https://www.terraform.io/docs/enterprise/admin/module-sharing.html) available in Terraform Enterprise.

```hcl
module "bigip" {
  source  = "privateregistry/modulereference"
  version = "v0.9.3"
  ...
}
```

### Maintaining your air-gapped copy of the modules

**On-going maintenance of the private repository**

Once the repository is established, perform the following actions whenever you want to retrieve the latest versions of the F5 modules (a script-wrapped version of these steps appears below). If you have a registry enabled on Terraform Enterprise, it should update automatically when the private repository is updated.

```bash
# get the latest from the origin repository
git fetch origin
# push any changes to the airgap repository
git push airgap main
# push all repository tags to the airgap repository
git push --tags airgap
```

**Review new versions for environment acceptance**

When your private repository is updated, do not forget to perform any acceptance testing you need to validate compliance and compatibility with your environment's expectations.
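Because the fetch-and-push maintenance steps are always the same, they lend themselves to a small script that can run on a schedule. A minimal sketch, assuming the origin and airgap remotes are configured as described above and the primary branch is named main; the script name and argument handling are illustrative.

```bash
#!/usr/bin/env bash
# sync-airgap-modules.sh (hypothetical): refresh the air-gapped module mirror.
set -euo pipefail

# Default to the module directory used in this article's example.
REPO_DIR="${1:-terraform-aws-bigip-module}"
cd "$REPO_DIR"

# Get the latest from the origin repository and fast-forward local main.
git fetch origin
git checkout main
git merge --ff-only origin/main

# Push the branch and all tags to the airgap repository.
git push airgap main
git push --tags airgap

echo "airgap repository synced to $(git rev-parse --short HEAD)"
```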
### Other references

- Installing and running iControl extensions in isolated GCP VPCs
- Deploy BIG-IP on GCP with GDM without Internet access

## Manage Infrastructure and Services Lifecycle with Terraform and Ansible + Demo

Working as a Solution Architect for F5, I often need access to a lab environment. "Traditionally," the method to implement a lab was to leverage tools like Vagrant, VMware, or others. A lab environment on a laptop is limited by its computing capacity (CPU, memory, disk, ...). Today we are often asked to show how we can integrate our solutions with many different tools (orchestration solutions, version control systems, CI servers, containerized environments, ...). Unless your laptop is a powerful one, it's difficult to build such an environment and have it run smoothly.

If the lab requirements are too demanding for my laptop, I would access one of our lab facilities to do my work. This approach itself is fine but brings some challenges:

- If you travel like I do, latency can become a hindrance and be frustrating.
- Lab facilities leverage "shared resources," which means you may face issues due to conflicting IP addresses, switch misconfiguration, maintenance operations, ...
- Some resources may already be reserved/used by a fellow colleague and not be available.

You may also face other constraints making both deployment models difficult:

- Need to share access to the lab. Not easy when it runs on your laptop or in a private cloud that is not always opened to the outside world.
- People may need to be able to replicate your lab in their own environment.
- Stability/time needed for maintenance: using a lab over and over will make it messy. At some point, you'll reach a stage where you want to create a "new" environment that is clean and "trustworthy" (until you've played too much with it again).

I'm sure I've missed other constraints, but you get the idea: maintaining a lab and using it in a collaborative manner is challenging.

Luckily, it's easier today to achieve those objectives: leverage public cloud! Public cloud gives you access to "unlimited" computing services over the internet that can be automated and orchestrated. With public cloud, you have access to an API allowing you to spin up a new environment with all the relevant tools deployed. This way, you can go straight to work (after enjoying a nice cup of coffee/tea while your infrastructure is being deployed!). Once your work is done, you can destroy this environment and save money. When you need a lab again, you'll be able to spin up a new, clean environment in a matter of minutes and be confident that it's a "healthy" lab.

When working on automation and orchestration of public cloud environments, I see two dominant tools: Terraform and Ansible.

https://www.terraform.io

Terraform is an open source command line tool that can be used to provision an infrastructure on dozens of different platforms and services (AWS, Azure, ...). One of the strengths of Terraform is that it is declarative: you specify the expected "state" of your infrastructure and Terraform will take care of all the underlying complexities (Does it need to be provisioned? Should I update the settings of a component? Which components should be created first? Do we need to delete resources that are no longer required? ...). Terraform stores the "state" of your infrastructure and configuration to be more efficient in its work.

https://www.ansible.com

Ansible is a provisioning and configuration management tool. It is designed to automate application deployments. One of the strengths of Ansible is that it doesn't require any agents to run on the targeted systems. Ansible works by leveraging "modules." Those modules are consumed to define the "state" of the targeted systems, and they are usually executed over SSH (by default).
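To make the on-demand lab lifecycle concrete, here is a minimal sketch of the command loop both tools imply: Terraform builds and destroys the environment, and Ansible configures what Terraform built. The directory, inventory, and playbook names are illustrative assumptions, not taken from the demo repository.

```bash
# Build the lab, configure it, work, then tear it down to stop the billing.
cd my-lab-environment          # hypothetical directory containing the *.tf files

terraform init                 # download the required providers and modules
terraform plan -out=lab.plan   # preview the changes and save the plan
terraform apply lab.plan       # create (or converge) the infrastructure

ansible-playbook -i inventory site.yml   # configure the instances (hypothetical playbook)

# ...lab work happens here...

terraform destroy              # remove everything when you're done
```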
So how do we leverage those tools to have a lab available on demand? In the following demo, we will:

- Leverage Terraform to manage the lifecycle of a new AWS environment: a dedicated VPC with external/internal subnets, Ubuntu instances, and the F5 solution. In addition to deploying our infrastructure, it will generate the relevant files for Ansible: an inventory file to know the IPs of our systems, and Ansible variable files to know how to configure the F5 solution with AS3 (a sketch of this handoff appears after this section).
- Use Ansible to manage the configuration of our systems: update our Ubuntu instances, install the NGINX web service on our Ubuntu instances, and deploy a standard F5 configuration to load balance our web application with AS3.

Here is a summary for the demo:

Demo time!

By leveraging tools like Terraform or Ansible (you can achieve the same results with other tools), it is easy to handle the lifecycle of an infrastructure and the services running on top of it. This is what people call IaC (Infrastructure as Code).
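One simple way to implement the Terraform-to-Ansible handoff described in the demo is to read Terraform's outputs after the apply and render them as an Ansible inventory. A minimal sketch, assuming a Terraform output named webserver_ips exists; that output name, the inventory group name, and the playbook name are illustrative assumptions, not taken from the demo repository.

```bash
# Assumes something like: output "webserver_ips" { value = aws_instance.web[*].public_ip }
# Render the output list into a minimal INI-style Ansible inventory.
terraform output -json webserver_ips \
  | jq -r '.[]' \
  | awk 'BEGIN { print "[webservers]" } { print }' > inventory.ini

# Point Ansible at the generated inventory.
ansible-playbook -i inventory.ini site.yml
```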
Useful links:

- If you want to learn more about the setup of this demo, it is posted on GitHub: here
- F5 provides a list of templates to automate deployment in public cloud, available here: AWS Templates, Azure Templates, GCP Templates
- F5 Application Services 3 (AS3) documentation/examples: here
- If you want to learn more about our API and how to automate/orchestrate F5 solutions (free training): F5 A&O Training

## Manage BIG-IPs in Azure using Terraform Cloud

### Introduction

In this article I'll outline a suite of demonstration resources designed to help you and your IT team explore the possibilities of applying DevOps practices in your own environments. The demonstration resources described below show how tools like Git, HashiCorp Terraform, HashiCorp Sentinel, Chef InSpec, and F5's Automation Toolchain can be used to introduce some of the practices listed above to F5 BIG-IPs and the IT services they help deliver. By following along with the README in the demonstration repository and the video walk-throughs listed below, you should be able to run this demonstration on your own.

### Software Delivery Key Practices

IT industry research, such as *Accelerate*, shows improving a company's ability to deliver software has a significant positive benefit to their overall success. The following practices and design principles are cornerstones to that improvement:

- Version control of code and configuration
- Automation of deployment
- Automation of testing and test data management
- "Shifting left" on security
- Loosely coupled architectures
- Pro-active notification

### Caveats

- These repositories use simplifying demonstration shortcuts for password, key, and network security. Production-ready enterprise designs and workflows should be used in place of these shortcuts. DO NOT ASSUME THAT THE CODE AND CONFIGURATION IN THESE REPOSITORIES IS PRODUCTION-READY.
- The particular source control approach shown in this demonstration is one of many. Before using this approach to support your Infrastructure as Code and Configuration Management assets and workflows, you should learn about different patterns of source code management and determine what best fits your team's needs.
- A variety of tools are used in this demonstration. In most cases they are not exclusively required and can be replaced with other similar tools.
- The demonstration uses a licensed version of Terraform Cloud in order to demonstrate the capabilities of HashiCorp Sentinel. If you are using the free version of Terraform Cloud, you won't be able to try the policy compliance use-cases, but the rest of the demonstration code should work as expected.

### Setting up your demonstration automation host

Before running the demonstration code, you'll need to set up the IDE host and the Azure account. Instructions for those steps are here.

### Video walk-throughs

**Fork the repository and open it in Visual Studio Code (1m36s)**

Once the tools are installed, you can create your own copy of the repository and open it in your IDE. In the videos, Visual Studio Code is used as the IDE. In order to follow along, you'll need to create your own repository in order to set up the Terraform Cloud configuration and make your own adjustments to the build configuration (e.g., the number of application servers deployed).

**Set up a Terraform Cloud workspace (1m38s)**

Before running the Terraform Cloud workflow, a Terraform Cloud workspace is required. This video steps you through manually configuring the workspace and linking it to your cloned repository.

**Programmatically set up Terraform Cloud workspaces for production, test, and development (10m40s)**

Setting up the workspaces programmatically has the benefits of rapid, consistent results and executable knowledge in the form of scripts and configuration files. In this video we step you through programmatically building workspaces for production, test, and development environments using this repository. We also programmatically configure simple source-controlled compliance Sentinel policies.
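To give a feel for what "programmatically building workspaces" looks like, here is a minimal sketch using the Terraform Cloud workspaces API. The organization name, workspace name, and token variable are placeholder assumptions; the demonstration repository's own scripts may do this differently.

```bash
# Create a Terraform Cloud workspace via the v2 API.
# $TFC_TOKEN and the organization "my-org" are placeholders.
cat > workspace.json <<'EOF'
{
  "data": {
    "type": "workspaces",
    "attributes": {
      "name": "bigip-demo-production"
    }
  }
}
EOF

curl --silent \
  --header "Authorization: Bearer $TFC_TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @workspace.json \
  "https://app.terraform.io/api/v2/organizations/my-org/workspaces"
```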
**Initial build of production, test, and development (7m59s)**

Everything should be ready for the first build of your production, test, and development environments. In this video, we step you through manually triggering Terraform Cloud builds. In addition, we'll see the impact of Sentinel policies in use, how to override policies that have been triggered, and the audit trail that results.

**Automated testing of production (4m18s)**

Once your environment builds have completed, it's critical to validate that they are fit for use. In this video, we step you through a simple set of tests that validate the readiness of the F5 BIG-IPs built by the Terraform Cloud workflow. These tests are not comprehensive, but they demonstrate the benefits of an executable "definition of done." The source of an updated version of the InSpec tests used in the demonstration is here.

**Manual inspection of production (2m45s)**

In this video, we walk through the BIG-IPs that were built in the production environment. We inspect the virtual servers and their associated pools, noting the number of application servers that were built and joined to the pool.

**Programmatically add application servers and include them in the BIG-IP virtual server (8m6s)**

In this video, we explore the use-case of expanding the pool in the previously built production environment, using a simple change in source control. We'll see the Terraform Cloud workflow automatically trigger a new build based on a merge commit to your cloned repository. New application servers will be built and automatically added to the pool by F5's Service Discovery iApp.

**Update WAF from a source control repository (no video walk-through)**

We leave it as an exercise for the reader (or possibly an updated video) to look for the WAF deployed with the virtual server. The WAF is retrieved from source control here. In addition, you can experiment with changing the version of the WAF in the AS3 template in the stanza shown below; usable values for the version are 0.1.0, 0.1.1, 0.2.0, and 0.2.1. If you choose to do this, follow the same workflow shown in the previous video about scaling the number of application servers (a sketch of that workflow follows the stanza).

```json
"ASM_Policy": {
  "class": "WAF_Policy",
  "url": "https://github.com/mjmenger/waf-policy/raw/0.1.1/asm_policy.xml",
  "ignoreChanges": false
}
```
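Here is a minimal sketch of that source-controlled WAF version bump: because the policy URL is version-pinned, the change is just an edit, a commit, and a merge. The file name as3_template.json is an assumption for illustration.

```bash
# Bump the pinned WAF policy version in the AS3 template (file name hypothetical).
sed -i 's#waf-policy/raw/0.1.1/#waf-policy/raw/0.2.0/#' as3_template.json

git checkout -b bump-waf-policy
git add as3_template.json
git commit -m "Update WAF policy 0.1.1 -> 0.2.0"
git push origin bump-waf-policy
# Merging the resulting pull request triggers a new Terraform Cloud run,
# which re-applies the AS3 declaration with the new policy version.
```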
### What's next?

If you've followed along through all of the use-cases in the demonstration repository, you have seen the following:

- Source-controlled build of an application environment, including BIG-IPs, virtual servers, pools, and WAF policies.
- Managed changes with logging of authoring and approvals.
- Automated scaling of application resources and BIG-IP configuration.
- Automated updates to BIG-IP WAF policies.

If you want to realize the benefits of these practices for your IT service delivery, please reach out to your F5 account team.

## Infrastructure as Code: Automating F5 Distributed Cloud CEs with Ansible

### Introduction

Welcome to the first installment of our Infrastructure as Code (IaC) series, focusing on F5 products and Ansible. This series has been a long-standing desire of mine to showcase the ability of IaC, utilizing Ansible Automation Platform, to deliver Day 0 through Day 2 operations with multiple F5 virtualized platforms. Over time, I've encountered numerous financial clients expressing interest in this topic. For many of these clients, the prospect of leveraging IaC to redeploy an environment outweighs the traditional approach of performing upgrades. This series will hopefully provide insight, documentation, and code for anyone embarking on this journey.

### Why Ansible Automation Platform?

Like most people, I started my journey with community editions of Ansible. As my coding became more complex, so did the need to ensure that my lab infrastructure adhered to the best security guidelines required by my company (my goal being to mimic how customers would, and should, do things in real life). I began utilizing Ansible Automation Platform to ensure my credentials were protected, as well as to organize and share my code with the rest of my team (following the "just in case you got hit by a bus" theory). Ansible Automation Platform utilizes execution environments (EEs) to ensure code runs efficiently and cleanly every time. Now, I am also creating execution environments via GitHub with workflows and pushing them up to Quay.io (https://github.com/VDI-Tech-Guy/f5-execution-engines). Huge thanks to Colin McNaughton at Red Hat for making my life so much easier with building EEs!

### Why deploy F5 Distributed Cloud on VMware vSphere?

As I mentioned before, I had the desire to build this Infrastructure as Code (IaC) code a while back, prior to the Broadcom acquisition of VMware. Being an ex-VMware employee, I had a lot of knowledge of virtualization platform infrastructure going into this project, and I started my focus on deploying on VMware vSphere. F5 Distributed Cloud can be deployed in any cloud, anywhere. However, I really wanted to focus on on-premises deployments because not every customer can afford the cloud. Moreover, there's always a back-and-forth battle between on-premises and the cloud, which has evolved into the hybrid cloud and the multi-cloud. I do intend to extend this series to the multi-cloud, but these initial deployments will be focused on VMware vSphere, as it is still utilized in many organizations across the globe.

### Information about the setup in the demo video

If you watch the video (below) on how the deployment works, you can see I did a bunch of the pre-work prior to launching the deployment, in the Git repository (link in Resources). Here are some of the pre-work items (a sketch of building an execution environment follows this list):

- A fully functional Ansible Automation Platform 2.4+ environment was set up and working (at the time, the controller version was 4.4.4).
- The execution environment was imported into the Ansible Automation Platform controller.
- The project was set up to import the playbooks from the Git repository (linked in the Resources section below) and to set the default execution environment.
- The demo inventory was set up (in our use case we only needed the vCenter host).
- Network credentials were set up for the vCenter.
- The template was set up and had its variables populated (note: the API key was hidden).
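As an aside on the execution environment pre-work, here is a minimal sketch of building and pushing an EE with ansible-builder. The image tag, registry path, and definition file contents are illustrative assumptions, not taken from the linked repository.

```bash
# Build an execution environment image from a definition file and push it
# to a registry. The tag and registry path are placeholders.
pip install ansible-builder

ansible-builder build \
  --file execution-environment.yml \
  --tag quay.io/myaccount/f5-ee:latest

# Push with podman (docker push works the same way).
podman push quay.io/myaccount/f5-ee:latest
```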
As mentioned in the video (below), the variables were populated for my environment and contain all the required information. I have provided a demo example in the Git repository so anyone can adapt my settings to their own environment; the example also has comments about each field (or area of a field) and the purpose of the variable.

```json
{
  "rhel_location": "https://vesio.blob.core.windows.net/releases/rhel/9/x86_64/images/vmware/rhel-9.2023.29-20231212012955-single-nic.ova",
  "xc_api_credential": "_____________________________________",
  "xc_namespace": "mmabis-automation",
  "xc_console_host": "f5-bd",
  "xc_user": "admin",
  "xc_pass": "Ansible123!",
  "vcenter_hostname": "{{ ansible_host }}",
  "vcenter_username": "{{ ansible_env.ANSIBLE_NET_USERNAME }}",
  "vcenter_password": "{{ ansible_env.ANSIBLE_NET_PASSWORD }}",
  "vcenter_validate_certs": false,
  "datacenter_name": "Apex",
  "cluster_name": "Worlds-Edge",
  "datastore": "TrueNAS-SSD",
  "dvs_switch_name": "DSC-DVS",
  "dns_name_servers": [
    "192.168.192.20",
    "192.168.192.1"
  ],
  "dns_name_search": [
    "dsc-services.local",
    "localdomain"
  ],
  "ntp_servers": [
    "0.pool.ntp.org",
    "1.pool.ntp.org",
    "2.pool.ntp.org"
  ],
  "domain_fqdn": "dsc-services.local",
  "DVS_Name": "{{dvs_switch_name}}",
  "Internal_Network": "DVS-Server-vLan",
  "External_Network": "DVS-DMZ-vLan",
  "resource_pool_name": "Lab-XC",
  "waiting_period": 2,
  "temp_download_location": "/tmp/xc-ova-download.ova",
  "xc_ova_builds": [
    {
      "hostname": "xc-automation-rhel-demo",
      "tmpl_name": "xc-automation-rhel-demo",
      "admin_password": "Ansible123!",
      "cluster_name": "xc-automation-cluster-rhel-demo",
      "dhcp": "no",
      "external_ip": "172.16.192.170",
      "external_ip_subnet_prefix": "24",
      "external_ip_gw": "172.16.192.1",
      "external_ip_route": "0.0.0.0/0",
      "internal_ip": "192.168.192.170",
      "internal_ip_subnet_prefix": "22",
      "internal_ip_gw": "192.168.192.1",
      "certified_hw": "vmware-regular-nic-voltmesh",
      "latitude": "39.51833126",
      "longitude": "-104.759496962",
      "build_count": 3,
      "nic_config": "rhel-multi"
    }
  ]
}
```

### Launching the code

With all of that pre-work handled, it was as easy as launching the code. There are a few caveats I learned over time when dealing with the automation that I wanted to share:

- Never re-use a cluster name in F5 Distributed Cloud, especially if it was used with a different version of the CE (there were communication issues between the CEs and previous cluster information that was stored in the F5 Distributed Cloud Console).
- The API credentials are system-level when trying to accept registration or create the token for importing into the environment. This code is designed to check for "{{ xc_namespace }}-token"; if it exists, the code will utilize the existing token, otherwise it will try to create it, so you need system-level permissions to do this.
- The build count should be 3 by default (it still needs to be defined) or an odd number, based on recommendations I have heard from our F5 field teams.

If there are more that I think of, I'll definitely edit the post and make sure it's up to date.

When launching the code, I was able to get the lab to build correctly multiple times. So please, if there is an issue or something I might not have documented well, feel free to let me know, and give it a shot for yourself!
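If you want to kick off the deployment from a terminal instead of the controller UI, something like the awx CLI can launch the job template. A minimal sketch; the controller host, token, and template name are placeholder assumptions, and the environment variable names may vary with your awx CLI version.

```bash
# Install the awx CLI and point it at the Ansible Automation Platform controller.
pip install awxkit

export CONTROLLER_HOST="https://aap-controller.example.com"   # placeholder
export CONTROLLER_OAUTH_TOKEN="REPLACE_WITH_YOUR_TOKEN"       # placeholder

# Launch the job template by name and stream the job output until it finishes.
awx job_templates launch "F5 XC CE Deploy" --monitor
```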
### YouTube video now on the DevCentral channel

### Resources

- https://github.com/f5devcentral/f5-bd-ansible-day0-automation - the code utilized for this deployment
- https://github.com/VDI-Tech-Guy/f5-execution-engines - building execution environments with GitHub and workflows

### Conclusion

I do hope that this series will help everyone who wants to embrace IaC. If you have any questions, feel free to reach out!