Using Terraform and F5® Distributed Cloud Mesh to establish secure connectivity between clouds
It is not uncommon for companies to have applications deployed independently in AWS, Azure, and GCP. When these applications are required to communicate with each other, these companies must deal with operational overhead and a new set of challenges, such as skills gaps, patching security vulnerabilities, and outages, leading to a bad customer experience. Setting up individual centers of excellence for managing each cloud is not the answer, as it leads to siloed management and often proves costly. This is where F5® Distributed Cloud Mesh can help. Using F5® Distributed Cloud Mesh, you can establish secure connectivity with minimal changes to existing application deployments, and you can do so without any outages or extended maintenance windows.

In this blog we will go over a multi-cloud scenario in which we establish secure connectivity between applications running in AWS and Azure. To show this, we will follow these steps:

- Deploy simple application web servers, plus VPCs and VNETs, in AWS and Azure respectively, using Terraform.
- Create virtual F5® Distributed Cloud sites for AWS and Azure using the Terraform provider for the Distributed Cloud platform. These virtual sites provide an abstraction for AWS VPCs and Azure VNETs, which can then be managed and used in aggregate.
- Use the Terraform provider to configure F5® Distributed Cloud Mesh ingress and egress gateways to provide connectivity to the Distributed Cloud backbone.
- Configure services such as security policies, DNS, HTTP load balancers, and F5® Distributed Cloud WAAP, which are required to establish secure connectivity between applications.

Terraform provider for F5® Distributed Cloud

The F5® Distributed Cloud Terraform provider can be used to configure Distributed Cloud Mesh objects, and these objects represent the desired state of the system. The desired state could be configuring an HTTP/TCP load balancer, a vK8s cluster, or a service mesh, enabling API security, etc. The Terraform F5® Distributed Cloud provider has more than 100 resources and data sources. Some of the resources we will be using in this example cover Distributed Cloud services like the HTTP load balancer, F5® Distributed Cloud WAAP, and F5® Distributed Cloud site creation in AWS and Azure. You can find a list of resources here.

Here are the steps to deploy a simple application using the F5® Distributed Cloud Terraform provider on AWS and Azure. I am using the repository below to create the configuration. You can also refer to the README on F5's DevCentral Git.

git clone https://github.com/f5devcentral/f5-digital-customer-engagement-center.git
cd f5-digital-customer-engagement-center/
git checkout mcn                                   # check out the multi-cloud branch
cd solutions/volterra/multi-cloud-connectivity/    # change dir for the multi-cloud scripts
# customize admin.auto.tfvars.example as per your needs
cp admin.auto.tfvars.example admin.auto.tfvars
./setup.sh        # deploy the Volterra sites which identify services in AWS, Azure, etc.
./aws-setup.sh    # deploy the application and infrastructure in AWS
./azure-setup.sh  # deploy the application and infrastructure in Azure

This will create the following objects on AWS and Azure:

- 3 VPC and VNET networks on each cloud respectively
- 3 F5® Distributed Cloud Mesh nodes on each cloud, seen as master-0
- 3 backend applications on each cloud, seen as scsmcn-workstation (projectPrefix is scsmcn in the admin.auto.tfvars file)
- 1 jump box on each cloud for testing
- 6 HTTP load balancers, one for each node, which can be accessed through the F5® Distributed Cloud Console
- 6 F5® Distributed Cloud sites, which can be accessed via the F5® Distributed Cloud Console

F5® Distributed Cloud Mesh does all the stitching of the VPCs and VNETs for you; you don't need to create any transit gateways. It also stitches the VPCs and VNETs to the F5® Distributed Cloud Application Delivery Network. A client accessing a backend application will use the nearest F5® Distributed Cloud regional network HTTP load balancer to minimize latency, using Anycast.

Run the setup.sh script to deploy the F5® Distributed Cloud sites; this creates the virtual sites that identify services deployed in AWS and Azure.

./setup.sh

Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Using previously-installed hashicorp/random v3.1.0
- Using previously-installed volterraedge/volterra v0.10.0

Terraform has been successfully initialized!

random_id.buildSuffix: Creating...
random_id.buildSuffix: Creation complete after 0s [id=c9o]
volterra_virtual_site.site: Creating...
volterra_virtual_site.site: Creation complete after 2s [id=3bde7bd5-3e0a-4fd5-b280-7434ee234117]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

buildSuffix = "73da"
volterraVirtualSite = "scsmcn-site-73da"

This creates the random build suffix and the virtual site.

The aws-setup.sh script deploys the VPCs, web servers, jump host, HTTP load balancer, F5® Distributed Cloud AWS site, and origin servers.

./aws-setup.sh

Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Using previously-installed hashicorp/null v3.1.0
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed volterraedge/volterra v0.10.0
- Using previously-installed hashicorp/aws v3.60.0
- Using previously-installed hashicorp/random v3.1.0

Terraform has been successfully initialized!

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.aws_instances.volterra["bu1"] will be read during apply
  # (config refers to values not yet known)
 <= data "aws_instances" "volterra" {
      + id  = (known after apply)
      + ids = (known after apply)

..... truncated output ....

volterra_app_firewall.waf: Creating...
module.vpc["bu2"].aws_vpc.this[0]: Creating...
aws_key_pair.deployer: Creating...
module.vpc["bu3"].aws_vpc.this[0]: Creating...
module.vpc["bu1"].aws_vpc.this[0]: Creating...
aws_route53_resolver_rule_association.bu["bu3"]: Creation complete after 1m18s [id=rslvr-rrassoc-d4051e3a5df442f29]

Apply complete! Resources: 90 added, 0 changed, 0 destroyed.

Outputs:

bu1JumphostPublicIp = "54.213.205.230"
vpcId = "{\"bu1\":\"vpc-051565f673ef5ec0d\",\"bu2\":\"vpc-0c4ad2be8f91990cf\",\"bu3\":\"vpc-0552e9a05bea8013e\"}"

The azure-setup.sh script executes the Terraform scripts to deploy the web servers, VNETs, HTTP load balancer, origin servers, and F5® Distributed Cloud Azure site.

./azure-setup.sh

Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed volterraedge/volterra v0.10.0
- Using previously-installed hashicorp/random v3.1.0
- Using previously-installed hashicorp/azurerm v2.78.0

Terraform has been successfully initialized!

..... truncated output ....

azurerm_private_dns_a_record.inside["bu11"]: Creation complete after 2s [id=/subscriptions/187fa2f3-5d57-4e6a-9b1b-f92ba7adbf42/resourceGroups/scsmcn-rg-bu11-73da/providers/Microsoft.Network/privateDnsZones/shared.acme.com/A/inside]

Apply complete! Resources: 58 added, 2 changed, 12 destroyed.

Outputs:

azureJumphostPublicIps = [
  "20.190.21.3",
]

After running the Terraform scripts you can sign in to the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click System on the left and then Site List to list the sites; you can also enter a search string to find a particular site. Below you can find the list of virtual sites deployed for Azure and AWS; the status of these sites can be seen in the F5® Distributed Cloud Console.

AWS Sites on F5® Distributed Cloud Console

To see the connectivity of sites to the Regional Edges, click System --> Site Map, click the site you want to focus on, then Connectivity, and click AWS. Below you can see the AWS virtual sites created on the F5® Distributed Cloud Console. This provides visibility, throughput, reachability, and health of the infrastructure provisioned on AWS, along with system- and application-level metrics.

Azure Sites on F5® Distributed Cloud Console

Below you can see the Azure virtual sites created on the F5® Distributed Cloud Console. This provides visibility, throughput, reachability, and health of the infrastructure provisioned on Azure, along with system- and application-level metrics.

Analytics on F5® Distributed Cloud Console

To check the status of the application, sign in to the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click the Application tab --> HTTP Load Balancer --> select the appropriate load balancer --> click Request. Below you can see various metrics for the applications deployed in AWS and Azure; you can see latency at different levels, such as client to LB, LB to server, and server to application. It also shows HTTP requests with error codes on application access.

API First F5® Distributed Cloud Console

The F5® Distributed Cloud Console helps with many operational tasks, such as visibility into request types and the JSON payload of a request, indicating browser type, device type, tenant, which HTTP load balancer the request came in on, and many more details.

Benefits

OpEx Reduction: A single, simplified stack can be used to manage apps in different clouds.
For example, the burden of configuring security policies at different locations is avoided, and the transit costs associated with the public cloud can be eliminated.

Reduce Operational Complexity: A network expert is not required, as the F5® Distributed Cloud Console provides a simplified way to configure and manage network and resources both at customer edge locations and in the public cloud. Your NetOps or DevOps person can easily deploy the infrastructure or applications without network expertise. Adoption of a new cloud provider is accelerated.

App User Experience: Customers don't have to learn different visibility tools; the F5® Distributed Cloud Console provides end-to-end visibility of applications, which results in a better user experience. The origin server or load balancer can be moved closer to the customer, which reduces latency for apps and APIs and results in a better experience.
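As a point of reference for the Terraform provider for F5® Distributed Cloud used by these scripts, a minimal provider configuration looks something like the block below. This is a sketch rather than the exact configuration in the repository: the credential file path and tenant URL are placeholders you will need to replace, and the API certificate passphrase is normally supplied out of band (for example via the VES_P12_PASSWORD environment variable).

terraform {
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = "0.10.0"   # version shown in the output above
    }
  }
}

provider "volterra" {
  # Placeholder path to the API certificate (p12) downloaded from the Console
  api_p12_file = "/path/to/api-credentials.p12"
  # Placeholder tenant URL
  url          = "https://<your-tenant>.console.ves.volterra.io/api"
}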
Using F5's Terraform modules in an air-gapped environment

Introduction

IT industry research, such as Accelerate, shows that improving a company's ability to deliver software is critical to its overall success. The following key practices and design principles are cornerstones of that improvement:

- Version control of code and configuration
- Automation of deployment
- Automation of testing and test data management
- "Shifting left" on security
- Loosely coupled architectures
- Pro-active notification

F5 has published Terraform modules on GitHub.com to help customers adopt deployment automation practices, focused on streamlining instantiation of BIG-IPs on AWS, Azure, and Google. Using these modules allows F5 customers to leverage their embedded knowledge and expertise.

But we have limited access to public resources

Not all customer Terraform automation hosts running the CLI or enterprise products are able to access public internet resources like GitHub.com and the Terraform Registry. The following steps describe how to create and maintain a private, air-gapped copy of F5's modules for these secured customer environments.

Creating your air-gapped copy of the modules you need

This example uses a personal GitHub account as an analog for air-gapped targets, so we can't use the fork feature of github.com to create the copy. For this approach, we're assuming a workstation that has access to both the source repository host and the target repository host; so, not truly fully air-gapped. We'll show a workflow using git bundle in the future.

Retrieve the remote URL for one of the modules at F5's devcentral GitHub account

Export the remote URL for the source repository:

export MODULEGITHUBURL="git@github.com:f5devcentral/terraform-aws-bigip-module.git"

Create a repository on the target air-gapped host

Follow the appropriate directions for the air-gapped hosted Git (BitBucket, GitLab, GitHub Enterprise, etc.), and retrieve the remote URL for this repository.

Export the remote URL for the air-gapped repository

Note: The air-gapped repository is still empty at this point.
Note: The example is using github.com; your real-world use will be using your internal Git host.

export MODULEAIRGAPURL="git@github.com:myteamsaccount/localmodulerepo.git"

Clone the module source repository

This example uses F5's module for AWS.

git clone $MODULEGITHUBURL

Add the target repository as an additional remote

Again, we're using F5's AWS module as an example. We're using the remote URL exported as MODULEAIRGAPURL to create the additional git repository remote.

cd terraform-aws-bigip-module
git remote add airgap $MODULEAIRGAPURL

Pass the latest to the air-gapped repository

Note: In the example below we're pushing the main branch. In some older repositories, the primary repository branch may still be named master.
Note: Pushing the tags into the airgap repository is critical to version management of the modules.

# get the latest from the origin repository
git fetch origin
# push any changes to the airgap repository
git push airgap main
# push all repository tags to the airgap repository
git push --tags airgap

Using your air-gapped copy of the modules

Identify the module version to use

This lists all of the tags available in the repository:

git tag

e.g.

0.9.2
v0.9
v0.9.1
v0.9.3
v0.9.4
v0.9.5

Review new versions for environment acceptance

At this point, your organization should perform any acceptance testing of the new tags prior to using them in production environments.
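Note that mirroring the modules covers only part of an air-gapped workflow: the provider plugins the modules depend on also have to be installed without reaching the public registry. One option, shown here as a sketch and as an assumption about your environment rather than a requirement of F5's modules, is a filesystem mirror in the Terraform CLI configuration:

# ~/.terraformrc (illustrative only; the mirror path is a placeholder)
provider_installation {
  filesystem_mirror {
    path    = "/opt/terraform/provider-mirror"
    include = ["registry.terraform.io/*/*"]
  }
  direct {
    exclude = ["registry.terraform.io/*/*"]
  }
}

The mirror directory can be populated from a connected workstation (for example with terraform providers mirror /opt/terraform/provider-mirror) and copied across the boundary alongside the module repository.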
Source reference in a Terraform module using git

Unlike using the Terraform Registry, when using git as your module source the version reference is included in the source URL. The source reference is the prefix git:: followed by the remote URL of the airgap repository, followed by ?ref=, finally followed by the tag identified in the previous step.

Note: We are referencing the airgap repository, NOT the origin repository.
Note: It is highly recommended to include the version reference in the URL. If the reference is not included in the URL, the latest commit to the default branch will be used at apply time. This means that the results of an apply will be non-deterministic, causing unexpected results and possibly service disruptions.

module "bigip" {
  source = "git::https://github.com/myteamsaccount/localmodulerepo.git?ref=v0.9.3"
  ...
}

Check out the Terraform documentation for more detailed configuration requirements.

Source reference in a Terraform module using a private Terraform registry

If you have an instance of Terraform Enterprise, it's possible to connect the private git repository created above to the private module registry (https://www.terraform.io/docs/enterprise/admin/module-sharing.html) available in Terraform Enterprise.

module "bigip" {
  source  = "privateregistry/modulereference"
  version = "v0.9.3"
  ...
}

Maintaining your air-gapped copy of the modules

On-going maintenance of the private repository

Once the repository is established, perform the following actions whenever you want to retrieve the latest versions of the F5 modules. If you have a registry enabled on Terraform Enterprise, it should update automatically when the private repository is updated.

# get the latest from the origin repository
git fetch origin
# push any changes to the airgap repository
git push airgap main
# push all repository tags to the airgap repository
git push --tags airgap

Review new versions for environment acceptance

When your private repository is updated, do not forget to perform any acceptance testing you need to validate compliance and compatibility with your environment's expectations.

Other references

Installing and running iControl extensions in isolated GCP VPCs
Deploy BIG-IP on GCP with GDM without Internet access
Pushing Updates to BIG-IP w/ HashiCorp Consul Terraform Sync

HashiCorp Consul Terraform Sync (CTS) is a tool/daemon that allows you to push updates to your BIG-IP devices in near real-time (this is also referred to as Network Infrastructure Automation). This helps in scenarios where you want to preserve an existing set of network/security policies and deliver updates to application services faster.

Consul Terraform Sync

Consul is a service registry that keeps track of where a service is (10.1.20.10:80 and 10.1.20.11:80) and the health of the service (responding to HTTP requests). Terraform allows you to push updates to your infrastructure, but usually in a one-and-done fashion (fire and forget). NIA is a symbiotic relationship of Terraform and Consul. It allows you to track changes via Consul (a new node added to or removed from a service) and push the change to your infrastructure via Terraform.

Putting CTS in Action

We can use CTS to help solve a common problem: how does a network/security team enable an application team to dynamically update the pool members for their application? This will be accomplished by defining a virtual server on the BIG-IP and then enabling the application team to update the state of the pool members (but not allowing them to modify the virtual server itself).

Defining the Virtual Server

The first step is to define what services we want. In this example we use a FAST template to generate an AS3 declaration that creates a set of Event-Driven Service Discovery pools. The Event-Driven pools will be updated by NIA, and we will apply an iControl REST RBAC policy to restrict updates.

The FAST template takes the inputs of "tenant", "virtual server IP", and "services". This generates a virtual server with 3 pools.

Event-Driven Service Discovery

Each of the pools is created using Event-Driven Service Discovery, which creates a new API endpoint with a path of:

/mgmt/shared/service-discovery/task/~[tenant]~EventDrivenApps~[service]_pool/nodes

You can send a POST API call to these endpoints to add/remove pool members (it handles creation/deletion of nodes). The format of the API call is an array of node objects:

[{"id":"[identifier]","ip":"[ip address]","port":[port (optional)]}]

We can use iControl REST RBAC to limit a user to only making updates via the Event-Driven API.

Creating a CTS Task

NIA can make use of existing Terraform providers, including the F5 BIG-IP provider. We create our own module that makes use of the Event-Driven API.

...
resource "bigip_event_service_discovery" "pools" {
  for_each = local.service_ids
  taskid   = "~EventDriven~EventDrivenApps~${each.key}_pool"
  dynamic "node" {
    for_each = local.groups[each.key]
    content {
      id   = node.value.node
      ip   = node.value.node_address
      port = node.value.port
    }
  }
}
...

Once NIA is run we can see it updating the BIG-IP:

- Finding f5networks/bigip versions matching "~> 1.5.0"...
...
module.AS3.bigip_event_service_discovery.pools["app003"]: Creating...
…
module.AS3.bigip_event_service_discovery.pools["app002"]: Creation complete after 0s [id=~EventDriven~EventDrivenApps~app002_pool]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Scaling up the environment to go from 3 pool members to 10, you can see NIA pick up the changes and apply them to the BIG-IP in near real-time.

module.AS3.bigip_event_service_discovery.pools["app001"]: Refreshing state... [id=~EventDriven~EventDrivenApps~app001_pool]
…
module.AS3.bigip_event_service_discovery.pools["app002"]: Modifying... [id=~EventDriven~EventDrivenApps~app002_pool]
…
module.AS3.bigip_event_service_discovery.pools["app002"]: Modifications complete after 0s [id=~EventDriven~EventDrivenApps~app002_pool]

Apply complete! Resources: 0 added, 3 changed, 0 destroyed.

NIA can be run interactively at the command line, but you can also run it as a system service (i.e. under systemd).

Alternate Method

In the previous example you saw AS3 used to define the virtual server resource. You can also opt to use the Event-Driven API directly on an existing BIG-IP pool (just be warned that it will obliterate any existing pool members once you send an update via the Event-Driven nodes API). To create a new Event-Driven pool you would send a POST call with the following payload to /mgmt/shared/service-discovery/task:

{
  "id": "test_pool",
  "schemaVersion": "1.0.0",
  "provider": "event",
  "resources": [
    {
      "type": "pool",
      "path": "/Common/test_pool",
      "options": {
        "servicePort": 8080
      }
    }
  ],
  "nodePrefix": "/Common/"
}

You would then be able to access it with the id of "test_pool". To remove it from Event-Driven Service Discovery you would send a DELETE call to /mgmt/shared/service-discovery/task/test_pool.

Separation of Concerns

In this example you saw how CTS could be used to separate network, security, and application tasks, but these could just as easily be combined using NIA. Consul Terraform Sync is now generally available, and I look forward to seeing how you can leverage it. For an example that is similar to this article, you can take a look at the following GitHub repo that demonstrates using NIA. You can also view another example on the Terraform registry.
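For reference, the CTS daemon itself is driven by an HCL configuration file that ties a Consul watch to a Terraform module like the one shown above. The block below is a minimal sketch rather than the configuration used in this demo: the addresses, credentials, module path, and service names are assumptions, and some field names have changed across CTS releases (newer versions use module and condition "services" blocks in place of source and services).

# cts-config.hcl (illustrative sketch only)
consul {
  address = "localhost:8500"
}

driver "terraform" {
  required_providers {
    bigip = {
      source = "F5Networks/bigip"
    }
  }
}

terraform_provider "bigip" {
  address  = "https://bigip.example.com"   # placeholder management address
  username = "admin"                       # placeholder; use a least-privileged account
  password = "change-me"                   # placeholder; prefer a secrets mechanism in practice
}

task {
  name      = "bigip-event-sd"
  source    = "./modules/bigip-event-sd"   # module containing the bigip_event_service_discovery resources
  services  = ["app001", "app002", "app003"]
  providers = ["bigip"]
}

With a file like this in place, CTS is started as a long-running daemon (for example with consul-terraform-sync -config-file cts-config.hcl) and re-applies the module whenever the watched services change in Consul.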
F5 BIG-IP as a Terminating Gateway for HashiCorp Consul

Our joint customers have asked for it, so the HashiCorp Consul team and the F5 BIG-IP teams have been working to provide this early look at the technology and configurations needed to have BIG-IP perform the Terminating Gateway functionality for HashiCorp Consul.

A Bit of an Introduction

HashiCorp Consul is a multi-cloud platform used to secure service-to-service communication. You have all heard about microservices and how, within an environment like Consul, there is a need to secure and control microservice-to-microservice communications. But what happens when you want a similar level of security and control when a microservice inside the environment needs to communicate with a service outside of the environment? Enter F5 BIG-IP: the enterprise Terminating Gateway solution for HashiCorp.

HashiCorp has announced the GA of their Terminating Gateway functionality, and we here at F5 want to show our support for this milestone by showing the progress we have made to date in answering the requests of our joint customers. One of the requirements of a Terminating Gateway is that it must respect the security policies defined within Consul. HashiCorp calls these policies Intentions.

What Should You Get Out of this Article

This article is focused on how BIG-IP, when acting as a Terminating Gateway, can understand those Intentions and securely apply those policies/Intentions to either allow or disallow microservice-to-microservice communications.

Update

We have been hard at work on this solution and have created a method to automate the manual processes detailed below. You can skip executing the steps below and jump directly to the new DevCentral Git repository for this solution. Feel free to read on to get an understanding of the workflows we have automated using the code in the new DevCentral repo. You can also check out this webinar to hear more about the solution and to see the developer, Shaun Empie, demo the automation of the solution.

First Steps

Before we dive into the iRulesLX that makes this possible, one must configure the BIG-IP virtual server to secure the connectivity with mTLS, and configure the pool, profiles, and other configuration options necessary for one's environment. Many here on DevCentral have shown how F5 can perform mTLS with various solutions. Eric Chen has shown how to configure the BIG-IP to use mTLS with Slack. What I want to focus on is how to use an iRuleLX to extract the info necessary to respect the HashiCorp Intentions and allow or disallow a connection based on the HashiCorp Consul Intention.

I have to give credit where credit is due. Sanjay Shitole is the one behind the scenes here at F5, along with Dan Callao and Blake Covarrubias from HashiCorp, who worked through the various API touch points, designed the workflow, and built the F5-specific iRules and iRulesLX needed to make this function.

Now for the Fun Part

Once you get your virtual server and pool created the way you would like them, with the mTLS certificates etc., you can focus on creating the iLX workspace where you will write the node.js code and iRules. You can follow the instructions here to create the iLX workspace, add an extension, and an LX plugin. Below is the TCL-based iRule that you will have to add to this workspace. To do this, go to Local Traffic > iRules > LX Workspaces and find the workspace you created in the steps above. In our example, we used "ConsulWorkSpace". Paste the text of the rule listed below into the text editor and click save file.
There is one variable (sb_debug) you can change in this file depending on the level of logging you want sent to the /var/log/ltm logs. The rest of the iRule grabs the full SNI value from the handshake; this will be parsed later on in the node.js code to populate one of the variables needed for checking the intention of this connection in Consul. The next section grabs the certificate and stores it as a variable so we can later extract the serial_id and the spiffe, which are the other two variables needed to check the Consul Intention. The next step in the iRule is to pass these three variables via an RPC_HANDLE function to the node.js code we will discuss below. The last section uses that same RPC_HANDLE to get responses back from the node code and either allows or disallows the connection based on the value of the Consul Intention.

when RULE_INIT {
    # set static::sb_debug to 2 if you want to enable logging to troubleshoot this iRule,
    # 1 for informational messages, otherwise set to 0
    set static::sb_debug 0
    if {$static::sb_debug > 1} { log local0. "rule init" }
}
when CLIENTSSL_HANDSHAKE {
    if { [SSL::extensions exists -type 0] } {
        binary scan [SSL::extensions -type 0] {@9A*} sni_name
        if {$static::sb_debug > 1} { log local0. "sni name: ${sni_name}" }
    }
    # use the ternary operator to return the servername conditionally
    if {$static::sb_debug > 1} { log local0. "sni name: [expr {[info exists sni_name] ? ${sni_name} : {not found} }]" }
}
when CLIENTSSL_CLIENTCERT {
    if {$static::sb_debug > 1} { log local0. "In CLIENTSSL_CLIENTCERT" }
    set client_cert [SSL::cert 0]
}
when HTTP_REQUEST {
    set serial_id ""
    set spiffe ""
    set log_prefix "[IP::remote_addr]:[TCP::remote_port clientside] [IP::local_addr]:[TCP::local_port clientside]"
    if { [SSL::cert count] > 0 } {
        HTTP::header insert "X-ENV-SSL_CLIENT_CERTIFICATE" [X509::whole [SSL::cert 0]]
        set spiffe [findstr [X509::extensions [SSL::cert 0]] "Subject Alternative Name" 39 ","]
        if {$static::sb_debug > 1} { log local0. "<$log_prefix>: SAN: $spiffe" }
        set serial_id [X509::serial_number $client_cert]
        if {$static::sb_debug > 1} { log local0. "<$log_prefix>: Serial_ID: $serial_id" }
    }
    if {$static::sb_debug > 1} { log local0.info "here is spiffe:$spiffe" }
    set RPC_HANDLE [ILX::init "SidebandPlugin" "SidebandExt"]
    if {[catch {ILX::call $RPC_HANDLE "func" $sni_name $spiffe $serial_id} result]} {
        if {$static::sb_debug > 1} { log local0.error "Client - [IP::client_addr], ILX failure: $result" }
        HTTP::respond 500 content "Internal server error: Backend server did not respond."
        return
    }
    ## return proxy result
    if { $result eq 1 } {
        if {$static::sb_debug > 1} { log local0. "Is the connection authorized: $result" }
    } else {
        if {$static::sb_debug > 1} { log local0. "Connection is not authorized: $result" }
        HTTP::respond 400 content '{"status":"Not_Authorized"}' "Content-Type" "application/json"
    }
}

Next, copy the text of the node.js code below and paste it into the index.js file using the GUI. There are two lines you will have to edit that are unique to your environment: the hostname and the port in the "const options =" section. These values are the IP and port on which your Consul server is listening for API calls. This node.js code takes the three values the TCL-based iRule passed to it and does some regex magic on the sni_name value to get the target variable that is used to check the Consul Intention. It does this by crafting an API call to the Consul server API endpoint that includes the Target, the ClientCertURI, and the ClientCertSerial values.
The Consul server responds, the node.js code captures that response, and it passes a value back to the TCL-based iRule indicating whether the communication is allowed or disallowed.

const http = require("http");
const f5 = require("f5-nodejs");

// Initialize ILX Server
var ilx = new f5.ILXServer();

ilx.addMethod('func', function(req, res) {
    var retstr = "";
    var sni_name = req.params()[0];
    var spiffe = req.params()[1];
    var serial_id = req.params()[2];
    const regex = /[^.]*/;
    let targetarr = sni_name.match(regex);
    target = targetarr.toString();
    console.log('My Spiffe ID is: ', spiffe);
    console.log('My Serial ID is: ', serial_id);

    // Construct request payload
    var data = JSON.stringify({
        "Target": target,
        "ClientCertURI": spiffe,
        "ClientCertSerial": serial_id
    });
    // Strip off newline character(s)
    data = data.replace(/\\n/g, '');

    // Construct connection settings
    const options = {
        hostname: '10.0.0.100',
        port: 8500,
        path: '/v1/agent/connect/authorize',
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': data.length
        }
    };

    // Construct Consul sideband HTTP call
    const myreq = http.request(options, res2 => {
        console.log(`Posting JSON to Consul -------> statusCode: ${res2.statusCode}`);
        res2.on('data', d => {
            // Capture response payload
            process.stdout.write(d);
            retstr += d;
        });
        res2.on('end', d => {
            // Check response for valid authorization and return back to TCL iRule
            var isVal = retstr.includes(":true");
            res.reply(isVal);
        });
    });
    myreq.on('error', error => {
        console.error(error);
    });
    // Initiate Consul call
    myreq.write(data);
    myreq.end();
});

// Start ILX listener
ilx.listen();

This iRulesLX solution allows multiple sources to connect to the BIG-IP virtual server and exchange mTLS info, but only allows the connection once the Consul Intentions are verified. With Intentions defined so that the client microservice is allowed and the socialapp microservice is denied, the client microservice would be able to communicate with the services behind the BIG-IP, whereas socialapp would be blocked at the BIG-IP, since we are capturing and respecting the Consul Intentions.

So now that we have shown how BIG-IP acts as a Terminating Gateway for HashiCorp Consul, all the while respecting the Consul Intentions, what's next? Next is for F5 and HashiCorp to continue working together on this solution. We intend to take this a level further by creating a prototype that automates the process. The automated prototype will have a mechanism to listen to changes within the Consul API server, and when a new service is defined behind the BIG-IP acting as the Terminating Gateway, the virtual server, the pools, the SSL profiles, and the iRulesLX workspace can be automatically configured via an AS3 declaration.

What Can You Do Next

You can find all of the iRules used in this solution in our DevCentral Github repo. Please reach out to me and the F5 HashiCorp Business Development team here if you have any questions, feature requests, or any feedback to make this solution better.
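For reference, intentions like the ones described above (allow the client service, deny socialapp) can also be expressed declaratively. The snippet below is a sketch using a Consul service-intentions configuration entry, which is available in newer Consul releases; the destination service name is illustrative rather than taken from this article.

Kind = "service-intentions"
Name = "web"                # destination service fronted by the BIG-IP Terminating Gateway (illustrative)
Sources = [
  {
    Name   = "client"
    Action = "allow"
  },
  {
    Name   = "socialapp"
    Action = "deny"
  }
]

An entry like this would typically be applied with consul config write, or the equivalent policies can be created from the Consul UI or the consul intention CLI.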
Consul Templating BIG-IP Services

Introduction

HashiCorp tools can provide service discovery, monitoring, configuration templates, and secrets for F5 BIG-IP services. We'll look at an example of using these tools to deploy BIG-IP services that provide high availability, global server load balancing, and security. In this example we will:

- Register Services: add Pool Member #1 and #2 to Consul
- Define Services: store the AS3 template in Consul (defining the BIG-IP LTM, DNS, and ASM services)
- Extract Secrets: use Consul Template to extract the certificate passphrase from Vault
- Generate Config: combine the pool members, AS3 template, and secret
- Deploy Services: Consul Template pushes the AS3 declaration (configuration) to the BIG-IP devices

HashiCorp Tools

The HashiCorp tools that we will leverage are:

- Consul: service discovery, monitoring, key/value store
- Vault: secrets
- Consul Template: templating

Consul

Consul provides an inventory of services (pool members) that the BIG-IP will be using. We will also store configuration information like IP addresses and SSL certificates/keys, and use Consul to monitor the health of the services.

Vault

Vault is a "secret" store. It is responsible for keeping sensitive data, like passwords, in encrypted formats. In this example we will store the certificate passphrase, used to encrypt the SSL certificate, in Vault for safekeeping.

Consul Template

Consul Template is a tool that can read data from both Consul and Vault to generate configuration files. This is useful for generating F5 Application Services 3 (AS3) declarations that define the BIG-IP LTM, DNS, and ASM services that we would like to deploy.

Building Out the Demo Environment

1. Register Services

First we need to populate the services that we will use. Here we have deployed two Consul agents, and each agent is responsible for registering and monitoring a NGINX service.

2. Define Services

We also want to use the Consul key/value store to hold additional information like IP addresses, SSL certificates, and the custom AS3 declaration. The AS3 declaration defines the services that we want (LTM and ASM), but omits details like pool members and certificate passphrases that we will populate later.

3. Extract Secrets

In the previous step we did not store the passphrase for the SSL certificate key in Consul. Without the passphrase the certificate is securely encrypted. We use Vault to store the passphrase and Consul Template to extract the secret value. This results in the secret value being encoded (base64) in the JSON declaration that will be sent to the BIG-IP.

4. Generate Config

To populate the pool members, we use a Consul Template template that pulls the pool member IPs from the Consul service registry and renders the AS3 declaration. We also update the state of each pool member based on the health reported by Consul. We make use of a separate template for BIG-IP DNS services (GSLB).

5. Deploying Services

Once Consul Template has generated the AS3 declarations, we send them to the BIG-IP devices to create the services. Browsing the BIG-IP devices you can see the services deployed.

LTM Services

Pools are populated by the pool members that are registered in Consul. Status is updated if you disable the service on one of the nodes; the template updates the pool member to the "disabled" state.

ASM Services

ASM policy and logging profile.

DNS Services

Wide IP deployed.
AS3 Declaration

Fetching the AS3 declaration from the BIG-IP, you can see that the passphrase is encrypted using the SecureVault feature of BIG-IP and is no longer in a reversible format.

Templating from 1 to 2 is Easy

In this example we deployed two applications and two BIG-IP devices. Using these same templates we can easily expand this to four applications and four BIG-IP devices by updating our deployment, adding a second data center to Consul and configuring a second BIG-IP device to respond to requests for app001.

In review, we leveraged a basket of HashiCorp tools to deploy BIG-IP services. If you liked this article, please be sure to also look at "Increase CI/CD velocity through automation integration".
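For readers who want to reproduce this workflow, the Consul Template daemon is driven by a configuration that points at Consul and Vault and re-renders the AS3 template whenever services or secrets change. The sketch below is an assumption about how such a setup could be wired together; the addresses, file paths, and deploy script are placeholders, not taken from the article.

# consul-template configuration (illustrative sketch; all values are placeholders)
consul {
  address = "127.0.0.1:8500"
}

vault {
  address = "https://vault.example.com:8200"
  # The Vault token is typically supplied via the VAULT_TOKEN environment variable.
}

template {
  source      = "/etc/consul-template/as3-ltm.json.tmpl"   # template combining services, KV data, and secrets
  destination = "/etc/consul-template/as3-ltm.json"        # rendered AS3 declaration
  command     = "/usr/local/bin/push-as3.sh"               # e.g. a script that POSTs the rendered file to the AS3 /mgmt/shared/appsvcs/declare endpoint
}

Each time Consul reports a service or health change, or the Vault secret rotates, the template is re-rendered and the command pushes the updated declaration to the BIG-IPs.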
The following post contains answers to questions asked during our webinar about Zero Touch Application Delivery with F5 BIG-IP, Terraform and Consul Q: What version of BIG-IP does the terraform provider work with? The version requirements can be found here: https://github.com/F5Networks/terraform-provider-bigip#requirements Q: Is there a plan for bootstrap and on-board F5 instances on Azure (and other cloud platforms) by using Terraform only i.e. without using Arm templates or any other toll in between? Similar to deploying and further managing Azure resources fully automated in Terraform. The provisioning in Azure can be done using the Azure resources also. The F5 BIG-IP provider can be used to manage configure and manage BIG-IP in all private and public clouds once it is provisioned. Here is an example: https://github.com/garyluf5/f5_terraform/blob/master/HA_via_lb_DO_AS3/main.tf Q: Were the pool members added dynamically by querying consul? But who made the change on the f5 device? Consul expose service catalog API for AS3. With this integration, changes are made on the F5 device automatically, as we have service discovery for consul feature configured on BIG-IP, this will poll the consul server for regular intervals, once the changes are noticed on consul server, the node worker on BIG-IP will pull the node information from the Consul server. Q: Are you saying that only BIG-IP 14.1 is supported with the Terraform AS3 provider? Isn't 12.1 supported in newer versions of AS3? TMOSv12.1 and above are supported on BIG-IP. Q: Does the repo mentioned in the webinar contain all the steps to build out the same demo? Yes - https://github.com/hashicorp/f5-terraform-consul-sd-webinar contains all the steps to build out the same demo Q: How does Consul support high availability? Consul is a highly available system. We use some gossip and consensus protocols under the hood to communicate with groups of massive number of nodes. It's not uncommon to see Consul data centers powering application servers in the thousands or more. Read the Consul architecture design document for more details. https://www.consul.io/docs/internals/architecture.html. Customers are recommended to deploy a three to five nodes which is a good balance balance of availability and performance. And if you have very large clusters pushing a lot of volume, you can add enterprise Consul feature,to enable non-voting members to horizontally scale out that Consul cluster for better performance. Read the deployment guide for more information. https://learn.hashicorp.com/consul/datacenter-deploy/deployment-guide Q: Does AS3 actually automatically added the nodes based on the response from consul? can we use the same worker but with a different discovery mechanism? At this point, only AS3 based JSON polling mechanism is supported as discovery mechanism. For other features or ideas, open an issue at https://github.com/F5Networks/f5-appsvcs-extension/issues Q: Does this solution support for other F5 modules like ASM, AFM etc.? Service discovery applies to the context of LTM. Through this service discovery solution, pool-members (nodes) are being changed on the BIG-IP. If the node uses a certain app-service configuration with ASM and AFM configured, the same will still apply to the newly nodes discovered. Q: Does Consul work on other cloud providers? Yes. Consul is a single binary that's compiled and go for multiple platforms, and can run in your own data centers, bare metal or VMs. 
It can also run in container schedulers such as Kubernetes, with a Helm chart to help you deploy it automatically into that environment. It is cloud-agnostic as well, providing cloud auto-join capabilities for nodes on AWS, Azure, and GCP.

Q: Are we able to tear down the resources that are built using AS3?
A: Yes. Since AS3 uses a declarative API, cleanup can be done by removing what is no longer needed and re-sending the declaration. For example, "Tenant1": {} would delete the whole Tenant1.

Q: How does the Consul health check work? How fast does the resource status get updated if it is down?
A: Consul provides many types of health checks, including TCP, HTTP, L7, or custom scripts. The one we showed in the demo was a simple TCP check against localhost port 80 for the NGINX server, with an interval of 10 seconds. These checks are configurable at the agent level and can be optimized for your workloads. Consul's health checking is very good at converging, in the sense that all the nodes can detect failures and communicate them. Generally, even in a 10,000-node cluster, you're looking at convergence of under a second, and smaller clusters converge much faster. Read the guide for more information: https://learn.hashicorp.com/consul/getting-started/services

Q: Is the data synchronized automatically after pushing the script, or do we have to do it manually at the device level?
A: Once service discovery is configured on the BIG-IP through the Terraform provider AS3 resource, the data is synchronized automatically.

Q: Can you show what the monitor configuration looks like on BIG-IP?
A: Yes (see the video). It is a simple out-of-the-box HTTP monitor.

Q: Will the same solution also scale down in F5? I mean, if we reduce the NGINX instances, will AS3 reduce the nodes back to the reduced number?
A: Yes, that is correct. The pool will scale down automatically once the nodes registered to the Consul server are scaled down.
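As a point of reference for the TCP health check described above, a Consul service definition for one of the NGINX nodes could look like the following. This is a sketch rather than the exact file from the demo; the check matches what the answer describes, but the file path and check ID are assumptions.

# /etc/consul.d/nginx.hcl (illustrative service registration with a TCP check)
service {
  name = "nginx"
  port = 80

  check {
    id       = "nginx-tcp"        # assumed identifier
    name     = "NGINX TCP on port 80"
    tcp      = "localhost:80"     # the simple TCP check described above
    interval = "10s"
    timeout  = "1s"
  }
}

After reloading the agent (for example with consul reload), the service and its health status appear in the catalog that the BIG-IP service discovery polls.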