Terraform Automation
BIG-IP Orchestration in a Private Data Center using the Terraform Cloud Agent
What is Terraform Cloud?

Terraform Cloud offers organizations a unified workflow for provisioning their cloud, private data center, and SaaS infrastructure, ensuring continuous infrastructure management throughout its entire lifecycle.

What is F5 BIG-IP?

BIG-IP is a collection of hardware platforms and software solutions providing services focused on security, reliability, and performance. It delivers and load balances applications securely and at scale, and it can be deployed in private or public clouds.

BIG-IP in a private Data Center

When BIG-IP runs in a private data center it typically has only a private IP address, which makes it difficult to reach from external tooling such as Terraform Cloud. If you manage both private and public clouds with Terraform Cloud, you can use Terraform Cloud agents: they let Terraform Cloud manage BIG-IP in a private data center even though its IP is not reachable from outside.

BIG-IP supports Application Services 3 (AS3) and FAST templates, which pair naturally with Terraform because the BIG-IP Terraform provider includes resources designed specifically for deploying AS3 declarations and FAST templates. AS3 and FAST are powerful tools for configuring and managing BIG-IP application services. AS3 simplifies defining, managing, and deploying application-related configuration, providing a declarative model for specifying how applications should be set up on BIG-IP devices. FAST (F5 Application Services Templates) builds on AS3 with a templating layer, so commonly used application configurations can be parameterized and reused instead of being written from scratch each time. The BIG-IP Terraform provider makes it straightforward to incorporate AS3 and FAST templates into infrastructure-as-code (IaC) workflows. The example Terraform configuration is at https://github.com/scshitole/privateDC

How to orchestrate BIG-IP in a private Data Center?

In this scenario, the BIG-IP system is running inside a private data center. A virtual machine hosts the Terraform Cloud agent, which runs in a container. The Terraform configuration for the deployment resides in the GitHub repository https://github.com/scshitole/privateDC

The Terraform Cloud agent is a lightweight component that runs in a container on the virtual machine. Its job is to establish and maintain a secure connection with Terraform Cloud and continually poll for work; when a run is assigned, it retrieves the workspace and the Terraform configuration. Because the agent communicates with Terraform Cloud outbound over HTTPS, it requires no changes to existing firewall rules, only correctly configured DNS. When a Terraform plan is queued, the control plane dispatches the configuration to the agent, which retrieves the workspace and applies the configuration to the BIG-IP infrastructure.
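The repository above is not reproduced here, but as a rough sketch of how the BIG-IP provider is typically used to push an AS3 declaration (the variable names and the declaration file below are placeholders rather than the repository's actual contents), the configuration generally looks like this:

terraform {
  required_providers {
    bigip = {
      source = "F5Networks/bigip"
    }
  }
}

# The run executes on the agent inside the data center, so the BIG-IP's
# private address only needs to be reachable from the agent's network.
provider "bigip" {
  address  = var.bigip_address
  username = var.bigip_username
  password = var.bigip_password
}

# Deploy an AS3 declaration (JSON file) onto the BIG-IP
resource "bigip_as3" "app" {
  as3_json = file("${path.module}/as3_declaration.json")
}

Because the run executes on the agent, the private address and credentials can stay in Terraform Cloud workspace variables and never need to be exposed outside the data center.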
What advantages does this solution offer?

Automation Workflows with Terraform Cloud: By leveraging Terraform Cloud, we gain the ability to establish automated workflows for configuring BIG-IP. This streamlines the configuration process and improves efficiency through automation. Terraform Cloud also enables orchestration of BIG-IP Web Application Firewall (WAF) configurations in a hybrid cloud environment, providing a comprehensive way to manage security across diverse infrastructures.

Enhanced Security with Maintained Private IPs and Credentials: The solution keeps the infrastructure's private IP addresses and credentials confidential, which prevents security sprawl and unauthorized access attempts and strengthens the integrity of the entire system.

Seamless BIG-IP Configuration Migration: BIG-IP configurations can be migrated smoothly between private and public cloud environments. This bidirectional migration, whether from a private cloud to a public cloud or vice versa, provides agility and scalability as organizational requirements evolve.

How to set up the configuration on Terraform Cloud?

1. Once logged into Terraform Cloud, choose your organization from the available options.
2. Go to the Projects & Workspaces section and opt for the Version Control Workflow.
3. The BIG-IP Terraform configuration template resides in the GitHub repository; choose the relevant repository source.
4. Choose the correct GitHub repository; it should be visible here.
5. Provide a name for the workspace; feel free to select something relevant, but it must be unique.
6. Enter the variables, including details such as the BIG-IP's IP address, username, and password. Be sure to select the HCL option, and mark the values as sensitive if needed.
7. Then go to the newly created workspace and click on "New run".
8. Go to the Agents section and select Create Agent Pool.
9. Enter a fitting name for the agent pool. A name matching the workspace makes it easy to identify, although that is not mandatory.
10. Provide a suitable description for the agent pool, explaining its specific purpose, then click "Generate Token"; you will need this token when running the agent.
11. Copy the newly generated token and follow the outlined steps to configure your agents. Run the provided docker command, including the essential environment variables TFC_AGENT_TOKEN and TFC_AGENT_NAME; if desired, you can also run the container in the background using the appropriate docker option. A hedged example of such a command is shown after this list.
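Terraform Cloud displays the exact docker command when you generate the agent pool token; as a hedged sketch (the token value and the agent name below are placeholders), running the agent container in the background looks something like this:

docker pull hashicorp/tfc-agent

docker run -d \
  -e TFC_AGENT_TOKEN="<token generated for the agent pool>" \
  -e TFC_AGENT_NAME="bigip-private-dc-agent" \
  hashicorp/tfc-agent

Once the agent appears in the pool, point the workspace at it by setting the workspace's execution mode to Agent and selecting this agent pool, so queued runs are dispatched to the agent rather than to Terraform Cloud's hosted runners.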
Key Takeaways

- Terraform Cloud streamlines infrastructure provisioning and management across various environments for consistent lifecycle control.
- Terraform Cloud agents enable effective orchestration of BIG-IP configurations in private data centers, addressing the challenges associated with private IPs.
- The integration of BIG-IP, AS3, FAST templates, and Terraform supports efficient infrastructure-as-code workflows, which is especially beneficial in multi-cloud setups.
- Terraform Cloud facilitates automated workflows, simplifies BIG-IP configuration, and supports orchestrating Web Application Firewall (WAF) setups in hybrid cloud environments.
- The solution keeps private IPs and credentials confidential, preventing security sprawl and unauthorized access.
- BIG-IP configurations can be migrated seamlessly between private and public cloud environments, ensuring adaptability and scalability.

For more details, please watch the accompanying video: https://youtu.be/RgCqnDxpf3E

Using Terraform and F5® Distributed Cloud Mesh to establish secure connectivity between clouds
It is not uncommon for companies to have applications deployed independently in AWS, Azure, and GCP. When these applications need to communicate with each other, the companies must deal with operational overhead and a new set of challenges, such as skills gaps, patching security vulnerabilities, and outages, all of which lead to a poor customer experience. Setting up individual centers of excellence for managing each cloud is not the answer, as it leads to siloed management and often proves costly. This is where F5® Distributed Cloud Mesh can help. Using F5® Distributed Cloud Mesh, you can establish secure connectivity with minimal changes to existing application deployments, and you can do so without outages or extended maintenance windows.

In this blog we will go over a multi-cloud scenario in which we establish secure connectivity between applications running in AWS and Azure. To show this, we will follow these steps:

- Deploy simple application web servers and VPCs/VNETs in AWS and Azure respectively using Terraform.
- Create virtual F5® Distributed Cloud sites for AWS and Azure using the Terraform provider for the Distributed Cloud platform. These virtual sites provide an abstraction for the AWS VPCs and Azure VNETs, which can then be managed and used in aggregate.
- Use the Terraform provider to configure F5® Distributed Cloud Mesh ingress and egress gateways that provide connectivity to the Distributed Cloud backbone.
- Configure the services required to establish secure connectivity between applications, such as security policies, DNS, the HTTP load balancer, and F5® Distributed Cloud WAAP.

Terraform provider for F5® Distributed Cloud

The F5® Distributed Cloud Terraform provider can be used to configure Distributed Cloud Mesh objects, and these objects represent the desired state of the system. The desired state could be an HTTP/TCP load balancer, a vk8s cluster, a service mesh, API security, and so on. The provider has more than 100 resources and data sources; the ones used in this example cover Distributed Cloud services such as the HTTP load balancer, F5® Distributed Cloud WAAP, and F5® Distributed Cloud site creation in AWS and Azure. You can find the full list of resources in the provider documentation.

Here are the steps to deploy a simple application with the F5® Distributed Cloud Terraform provider on AWS and Azure. I am using the repository below to create the configuration. You can also refer to the README on F5's DevCentral Git.

git clone https://github.com/f5devcentral/f5-digital-customer-engagement-center.git
cd f5-digital-customer-engagement-center/
git checkout mcn    # check out the multi-cloud branch
cd solutions/volterra/multi-cloud-connectivity/    # change directory to the multi-cloud scripts
# customize admin.auto.tfvars.example as per your needs
cp admin.auto.tfvars.example admin.auto.tfvars
./setup.sh          # deploy the Volterra virtual site that identifies services in AWS, Azure, etc.
./aws-setup.sh      # deploy the application and infrastructure in AWS
./azure-setup.sh    # deploy the application and infrastructure in Azure
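These scripts drive the volterraedge/volterra Terraform provider seen in the output below. As a rough, hedged sketch (the tenant URL, credential file, label selector, and names are placeholders, and attribute details may vary between provider versions), the kind of virtual site that setup.sh creates looks roughly like this:

terraform {
  required_providers {
    volterra = {
      source = "volterraedge/volterra"
    }
  }
}

provider "volterra" {
  api_p12_file = var.api_p12_file    # API credential file downloaded from the Console
  url          = "https://<tenant>.console.ves.volterra.io/api"
}

# Virtual site that groups the AWS and Azure sites sharing a common label
resource "volterra_virtual_site" "site" {
  name      = "example-virtual-site"
  namespace = "shared"
  site_type = "CUSTOMER_EDGE"
  site_selector {
    expressions = ["bu in (bu1,bu2,bu3)"]    # placeholder label selector
  }
}

The aws-setup.sh and azure-setup.sh scripts use the same provider to create the cloud sites themselves (resources such as volterra_aws_vpc_site and volterra_azure_vnet_site) alongside the application infrastructure.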
This will create the following objects on AWS and Azure:

- 3 VPC and VNET networks in each cloud, respectively
- 3 F5® Distributed Cloud Mesh nodes in each cloud, seen as master-0
- 3 backend applications in each cloud, seen as scsmcn-workstation (projectPrefix is set to scsmcn in the admin.auto.tfvars file)
- 1 jump box in each cloud for testing
- 6 HTTP load balancers, one for each node, accessible through the F5® Distributed Cloud Console
- 6 F5® Distributed Cloud sites, which can be accessed via the F5® Distributed Cloud Console

F5® Distributed Cloud Mesh does all the stitching of the VPCs and VNETs for you; you do not need to create any transit gateways. It also stitches the VPCs and VNETs to the F5® Distributed Cloud Application Delivery Network. A client accessing a backend application will use the nearest F5® Distributed Cloud regional network HTTP load balancer, via Anycast, to minimize latency.

Run the setup.sh script to deploy the F5® Distributed Cloud virtual site that identifies services deployed in AWS and Azure.

./setup.sh

Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Using previously-installed hashicorp/random v3.1.0
- Using previously-installed volterraedge/volterra v0.10.0

Terraform has been successfully initialized!

random_id.buildSuffix: Creating...
random_id.buildSuffix: Creation complete after 0s [id=c9o]
volterra_virtual_site.site: Creating...
volterra_virtual_site.site: Creation complete after 2s [id=3bde7bd5-3e0a-4fd5-b280-7434ee234117]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

buildSuffix = "73da"
volterraVirtualSite = "scsmcn-site-73da"

This created the random build suffix and the virtual site.
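Before looking at the aws-setup.sh and azure-setup.sh runs below, it may help to see roughly what the HTTP load balancer and origin pool objects they create look like. The configuration below is purely an illustration and not the repository's exact code (the names, backend IP, domain, and advertisement policy are placeholders, and attribute details may differ across provider versions):

# Origin pool pointing at a private web server reachable through the virtual site
resource "volterra_origin_pool" "app" {
  name                   = "example-origin-pool"
  namespace              = "default"
  port                   = 80
  endpoint_selection     = "LOCAL_PREFERRED"
  loadbalancer_algorithm = "LB_OVERRIDE"

  origin_servers {
    private_ip {
      ip = "10.0.1.10"    # placeholder backend address
      site_locator {
        virtual_site {
          name      = "scsmcn-site-73da"    # virtual site created by setup.sh above
          namespace = "shared"              # placeholder namespace
        }
      }
      inside_network = true
    }
  }
}

# HTTP load balancer that fronts the origin pool
resource "volterra_http_loadbalancer" "app" {
  name      = "example-http-lb"
  namespace = "default"
  domains   = ["inside.shared.acme.com"]    # placeholder domain

  http {
    dns_volterra_managed = false
  }

  default_route_pools {
    pool {
      name      = volterra_origin_pool.app.name
      namespace = "default"
    }
  }

  advertise_on_public_default_vip = true    # placeholder; pick the advertisement policy your deployment needs
}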
module.vpc["bu1"].aws_vpc.this[0]: Creating... aws_route53_resolver_rule_association.bu["bu3"]: Creation complete after 1m18s [id=rslvr-rrassoc-d4051e3a5df442f29] Apply complete! Resources: 90 added, 0 changed, 0 destroyed. Outputs: bu1JumphostPublicIp = "54.213.205.230" vpcId = "{\"bu1\":\"vpc-051565f673ef5ec0d\",\"bu2\":\"vpc-0c4ad2be8f91990cf\",\"bu3\":\"vpc-0552e9a05bea8013e\"}" azure-setup.sh will execute terraform scripts to deploy webservers, vnet, http load balancer , origin servers andF5® Distributed Cloud azure site. ./azure-setup.sh Initializing modules... Initializing the backend... Initializing provider plugins... - Reusing previous version of volterraedge/volterra from the dependency lock file - Reusing previous version of hashicorp/random from the dependency lock file - Reusing previous version of hashicorp/azurerm from the dependency lock file - Using previously-installed volterraedge/volterra v0.10.0 - Using previously-installed hashicorp/random v3.1.0 - Using previously-installed hashicorp/azurerm v2.78.0 Terraform has been successfully initialized! ..... truncated output .... azurerm_private_dns_a_record.inside["bu11"]: Creation complete after 2s [id=/subscriptions/187fa2f3-5d57-4e6a-9b1b-f92ba7adbf42/resourceGroups/scsmcn-rg-bu11-73da/providers/Microsoft.Network/privateDnsZones/shared.acme.com/A/inside] Apply complete! Resources: 58 added, 2 changed, 12 destroyed. Outputs: azureJumphostPublicIps = [ "20.190.21.3", ] After running terraform script you can sign in into the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click on System on the left and then Site List to list the sites, you can also enter into search string to search a particular site, Below you can find list of virtual sites deployed for Azure and AWS, status of these sites can be seen using F5® Distributed Cloud Console. AWS Sites on F5® Distributed Cloud Console Now in order to see the connectivity of sites to the Regional Edges, click System --> Site Map --> Click on the appropriate site you want to focus and then Connectivity --> Click AWS, Below you can see the AWS virtual sites created on F5® Distributed Cloud Console, this provides visibility, throughput, reachability and health of the infrastructure provisioned on AWS. Provides system and application level metrics. Azure Sites on F5® Distributed Cloud Console Below you can see the Azure virtual sites created on F5® Distributed Cloud Console, this provides visibility, throughput, reachability and health of the infrastructure provisioned on Azure. Provides system and application level metrics. Analytics on F5® Distributed Cloud Console To check the status of the application, sign in into the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click on the application tab --> HTTP load balancer --> select appropriate load balancer --> click Request. Below you can see various matrices for applications deployed into the AWS and Azure cloud, you can see latency at different levels like client to lb, lb to server and server to application. Also it provides HTTP requests with Error codes on application access. API First F5® Distributed Cloud Console F5® Distributed Cloud Console helps many operational tasks like visibility into request types, JSON payload of the request indicating browser type, device type, tenant and also request came on which http load balancers and many more details. Benefits OpEx Reduction: Single simplified stack can be used to manage apps in different clouds. 
For example, the burden of configuring security policies in different locations is avoided, and the transit costs associated with the public cloud can be eliminated.

Reduce Operational Complexity: A network expert is not required, because the F5® Distributed Cloud Console provides a simplified way to configure and manage networks and resources both at customer edge locations and in the public cloud. Your NetOps or DevOps engineers can easily deploy the infrastructure and applications without networking expertise, and adoption of a new cloud provider is accelerated.

App User Experience: Customers do not have to learn different visibility tools; the F5® Distributed Cloud Console provides end-to-end visibility of applications, which results in a better user experience. The origin server or load balancer can be moved closer to the customer, which reduces latency for apps and APIs and further improves the experience.