It is not uncommon for companies to have applications deployed independently in AWS, Azure, and GCP. When these applications need to communicate with each other, companies must deal with operational overhead and a new set of challenges, such as skills gaps, patching security vulnerabilities, and outages, all of which lead to a poor customer experience. Setting up individual centers of excellence to manage each cloud is not the answer, as it leads to siloed management and often proves costly. This is where F5 Distributed Cloud Mesh can help. Using F5® Distributed Cloud Mesh, you can establish secure connectivity with minimal changes to existing application deployments, and you can do so without outages or extended maintenance windows.
In this blog we will go over a multi-cloud scenario in which we establish secure connectivity between applications running in AWS and Azure. To show this, we will follow the steps below.
The F5® Distributed Cloud Terraform provider can be used to configure Distributed Cloud Mesh objects, and these objects represent the desired state of the system. The desired state could be an HTTP/TCP load balancer, a vK8s cluster, a service mesh, API security, and so on. The Terraform F5® Distributed Cloud provider has more than 100 resources and data sources. Some of the resources we will use in this example are for Distributed Cloud services such as the HTTP load balancer, F5® Distributed Cloud WAAP, and F5® Distributed Cloud site creation in AWS and Azure. You can find the full list of resources here.
Here are the steps to deploy a simple application on AWS and Azure using the F5® Distributed Cloud Terraform provider. I am using the repository below to create the configuration; you can also refer to the README on F5's DevCentral Git.
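As a quick orientation before running any scripts, here is a minimal sketch of how the provider itself is configured. The credential path and tenant URL are placeholders for your own environment, and the version pin simply matches the v0.10.0 provider shown in the output later in this post; the P12 password is read from the VES_P12_PASSWORD environment variable.
terraform {
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = "0.10.0"
    }
  }
}

# API certificate (.p12) downloaded from the F5® Distributed Cloud Console;
# export VES_P12_PASSWORD before running terraform. Both values below are
# placeholders for your own tenant.
provider "volterra" {
  api_p12_file = "/path/to/api-creds.p12"
  url          = "https://<tenant>.console.ves.volterra.io/api"
}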
git clone https://github.com/f5devcentral/f5-digital-customer-engagement-center.git
cd f5-digital-customer-engagement-center/
git checkout mcn # checkout to multi cloud branch
cd solutions/volterra/multi-cloud-connectivity/ # change dir for multi cloud scripts
cp admin.auto.tfvars.example admin.auto.tfvars
# customize admin.auto.tfvars to match your environment (see the example after these steps)
./setup.sh # Run setup.sh to deploy the Volterra virtual sites that identify services in AWS, Azure, etc.
./aws-setup.sh # Run aws-setup.sh to deploy the application and infrastructure in AWS
./azure-setup.sh # Run azure-setup.sh to deploy the application and infrastructure in Azure
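Before running the scripts, populate admin.auto.tfvars with your project settings. The variable names and values below are illustrative placeholders only; the authoritative names are in admin.auto.tfvars.example and the repo's variables files. The "scsmcn" prefix simply matches the resource names shown in the output later in this post.
# admin.auto.tfvars -- placeholder variable names, assumed for illustration
projectPrefix = "scsmcn"          # prefix prepended to created resources
awsRegion     = "us-west-2"       # AWS region for the VPCs and the AWS site
azureLocation = "westus2"         # Azure region for the VNets and the Azure site
sshPublicKey  = "ssh-rsa AAAA..." # public key injected into jump hosts and web servers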
Running these scripts creates the required objects in AWS and Azure.
F5® Distributed Cloud Mesh does all the stitching of the VPCs and VNets for you; you don't need to create any transit gateway. It also stitches the VPCs and VNets to the F5® Distributed Cloud Application Delivery Network. When a client accesses the backend application, it uses the nearest F5® Distributed Cloud regional network HTTP load balancer, minimizing latency through Anycast.
Run the setup.sh script to deploy the F5® Distributed Cloud sites. This creates the virtual sites that identify the services deployed in AWS and Azure.
./setup.sh
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Using previously-installed hashicorp/random v3.1.0
- Using previously-installed volterraedge/volterra v0.10.0
Terraform has been successfully initialized!
random_id.buildSuffix: Creating...
random_id.buildSuffix: Creation complete after 0s [id=c9o]
volterra_virtual_site.site: Creating...
volterra_virtual_site.site: Creation complete after 2s [id=3bde7bd5-3e0a-4fd5-b280-7434ee234117]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
buildSuffix = "73da"
volterraVirtualSite = "scsmcn-site-73da"
created random build suffix and virtual site
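Under the hood, setup.sh applies roughly the resource below (a sketch: the label expression is an assumption, not the repo's exact selector). The virtual site selects every customer-edge site carrying the matching label, which is how the AWS and Azure sites created in the next steps get grouped together.
resource "volterra_virtual_site" "site" {
  name      = "scsmcn-site-73da"   # <prefix>-site-<buildSuffix>
  namespace = "shared"
  site_type = "CUSTOMER_EDGE"

  # Match any customer-edge site labeled with this key/value; the AWS and
  # Azure sites deployed later carry the same label (label key assumed).
  site_selector {
    expressions = ["site-group in (scsmcn-site-73da)"]
  }
}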
Run the aws-setup.sh script to deploy the VPCs, web servers, jump host, HTTP load balancer, F5® Distributed Cloud AWS site, and origin servers.
./aws-setup.sh
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Using previously-installed hashicorp/null v3.1.0
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed volterraedge/volterra v0.10.0
- Using previously-installed hashicorp/aws v3.60.0
- Using previously-installed hashicorp/random v3.1.0
Terraform has been successfully initialized!
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
# data.aws_instances.volterra["bu1"] will be read during apply
# (config refers to values not yet known)
<= data "aws_instances" "volterra" {
+ id = (known after apply)
+ ids = (known after apply)
..... truncated output ....
volterra_app_firewall.waf: Creating...
module.vpc["bu2"].aws_vpc.this[0]: Creating...
aws_key_pair.deployer: Creating...
module.vpc["bu3"].aws_vpc.this[0]: Creating...
module.vpc["bu1"].aws_vpc.this[0]: Creating...
aws_route53_resolver_rule_association.bu["bu3"]: Creation complete after 1m18s [id=rslvr-rrassoc-d4051e3a5df442f29]
Apply complete! Resources: 90 added, 0 changed, 0 destroyed.
Outputs:
bu1JumphostPublicIp = "54.213.205.230"
vpcId = "{\"bu1\":\"vpc-051565f673ef5ec0d\",\"bu2\":\"vpc-0c4ad2be8f91990cf\",\"bu3\":\"vpc-0552e9a05bea8013e\"}"
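The interesting part of the AWS run is how the application is published. Roughly speaking, aws-setup.sh creates an origin pool that reaches the web servers over the site's inside network via the virtual site, and an HTTP load balancer that advertises that pool back onto the virtual site, so the app is reachable from both clouds without a transit gateway. The sketch below is simplified and uses assumed names, IPs, and attribute values; several of the load balancer's other one-of options (WAF, rate limiting, and so on) are omitted, so consult the repo and provider docs for the exact configuration.
resource "volterra_origin_pool" "bu1" {
  name                   = "scsmcn-pool-bu1"   # assumed name
  namespace              = "default"
  port                   = 80
  no_tls                 = true
  endpoint_selection     = "LOCAL_PREFERRED"
  loadbalancer_algorithm = "LB_OVERRIDE"

  origin_servers {
    private_ip {
      ip             = "10.1.52.10"            # BU1 web server private IP (assumed)
      inside_network = true
      site_locator {
        virtual_site {
          name      = "scsmcn-site-73da"
          namespace = "shared"
        }
      }
    }
  }
}

resource "volterra_http_loadbalancer" "bu1" {
  name      = "scsmcn-lb-bu1"                  # assumed name
  namespace = "default"
  domains   = ["bu1.shared.acme.com"]          # matches the shared.acme.com private DNS zone
  http {
    dns_volterra_managed = false
  }

  default_route_pools {
    pool {
      name      = volterra_origin_pool.bu1.name
      namespace = "default"
    }
    weight = 1
  }

  # Advertise the VIP only on the sites selected by the virtual site, keeping
  # the application private while making it reachable from AWS and Azure.
  advertise_custom {
    advertise_where {
      virtual_site {
        virtual_site {
          name      = "scsmcn-site-73da"
          namespace = "shared"
        }
      }
    }
  }
}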
The azure-setup.sh script executes the Terraform configuration to deploy the web servers, VNets, HTTP load balancer, origin servers, and F5® Distributed Cloud Azure site.
./azure-setup.sh
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed volterraedge/volterra v0.10.0
- Using previously-installed hashicorp/random v3.1.0
- Using previously-installed hashicorp/azurerm v2.78.0
Terraform has been successfully initialized!
..... truncated output ....
azurerm_private_dns_a_record.inside["bu11"]: Creation complete after 2s [id=/subscriptions/187fa2f3-5d57-4e6a-9b1b-f92ba7adbf42/resourceGroups/scsmcn-rg-bu11-73da/providers/Microsoft.Network/privateDnsZones/shared.acme.com/A/inside]
Apply complete! Resources: 58 added, 2 changed, 12 destroyed.
Outputs:
azureJumphostPublicIps = [
"20.190.21.3",
]
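For completeness, the Azure site that azure-setup.sh registers looks roughly like the sketch below. The names, CIDRs, credential object, machine type, and certified-hardware value are all assumptions for illustration; the site would also typically be labeled so the virtual site's selector picks it up, and a volterra_tf_params_action resource is commonly used to trigger the actual site provisioning.
resource "volterra_azure_vnet_site" "site" {
  name           = "scsmcn-azure-73da"     # assumed name
  namespace      = "system"
  azure_region   = "westus2"
  resource_group = "scsmcn-rg-73da"
  machine_type   = "Standard_D3_v2"
  ssh_key        = "ssh-rsa AAAA..."

  # Azure cloud-credentials object created beforehand in the tenant (assumed name)
  azure_cred {
    name      = "azure-cred"
    namespace = "system"
  }

  # Create a new VNet for the site (the repo may attach to existing VNets instead)
  vnet {
    new_vnet {
      name         = "scsmcn-vnet-73da"
      primary_ipv4 = "10.2.0.0/16"
    }
  }

  # Two-interface ingress/egress gateway node
  ingress_egress_gw {
    azure_certified_hw = "azure-byol-multi-nic-voltmesh"
    az_nodes {
      azure_az = "1"
      inside_subnet {
        subnet_param {
          ipv4 = "10.2.1.0/24"
        }
      }
      outside_subnet {
        subnet_param {
          ipv4 = "10.2.2.0/24"
        }
      }
    }
  }
}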
After running the Terraform scripts, you can sign in to the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click System on the left and then Site List to list the sites; you can also enter a search string to find a particular site. Below you can find the list of virtual sites deployed for Azure and AWS; the status of these sites can be seen in the F5® Distributed Cloud Console.
Now, to see the connectivity of the sites to the Regional Edges, click System --> Site Map --> click the site you want to focus on --> Connectivity --> click AWS. Below you can see the AWS virtual sites created in the F5® Distributed Cloud Console; this provides visibility into the throughput, reachability, and health of the infrastructure provisioned on AWS, along with system- and application-level metrics.
Below you can see the Azure virtual sites created in the F5® Distributed Cloud Console; this provides visibility into the throughput, reachability, and health of the infrastructure provisioned on Azure, along with system- and application-level metrics.
To check the status of the application, sign in to the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click the Applications tab --> HTTP Load Balancers --> select the appropriate load balancer --> click Requests. Below you can see various metrics for the applications deployed in AWS and Azure, including latency at different levels such as client to LB, LB to server, and server to application. It also shows the HTTP requests with error codes returned on application access.
The F5® Distributed Cloud Console helps with many operational tasks, providing visibility into request types and the JSON payload of each request, which indicates the browser type, device type, tenant, which HTTP load balancer the request arrived on, and many more details.
OpEx Reduction: A single, simplified stack can be used to manage apps across different clouds. For example, the burden of configuring security policies in multiple locations is avoided, and the transit costs associated with the public clouds can be eliminated.
Reduced Operational Complexity: A network expert is not required, as the F5® Distributed Cloud Console provides a simplified way to configure and manage networks and resources both at customer edge locations and in the public cloud. Your NetOps or DevOps team can easily deploy the infrastructure or applications without networking expertise, and adoption of a new cloud provider is accelerated.
App User Experience: Customers don't have to learn different visibility tools; the F5® Distributed Cloud Console provides end-to-end visibility of applications, which results in a better user experience. The origin server or load balancer can be moved closer to the customer, reducing latency for apps and APIs and further improving the experience.