F5 and HashiCorp Essentials
Using Terraform and F5® Distributed Cloud Mesh to establish secure connectivity between clouds
It is not uncommon for companies to have applications deployed independently in AWS, Azure, and GCP. When these applications need to communicate with each other, those companies take on operational overhead and a new set of challenges, such as skills gaps, patching security vulnerabilities, and outages, all of which lead to a poor customer experience. Setting up individual centers of excellence to manage each cloud is not the answer, as it leads to siloed management and often proves costly. This is where F5® Distributed Cloud Mesh can help. Using F5® Distributed Cloud Mesh, you can establish secure connectivity with minimal changes to existing application deployments, and you can do so without any outages or extended maintenance windows. In this blog we will go over a multi-cloud scenario in which we establish secure connectivity between applications running in AWS and Azure. To show this, we will follow these steps:

Deploy simple application web servers and the VPCs and VNETs in AWS and Azure, respectively, using Terraform.
Create virtual F5® Distributed Cloud sites for AWS and Azure using the Terraform provider for the Distributed Cloud platform. These virtual sites provide an abstraction for AWS VPCs and Azure VNETs, which can then be managed and used in aggregate.
Use the Terraform provider to configure F5® Distributed Cloud Mesh ingress and egress gateways that provide connectivity to the Distributed Cloud backbone.
Configure services such as security policies, DNS, the HTTP load balancer, and F5® Distributed Cloud WAAP, which are required to establish secure connectivity between applications.

Terraform provider for F5® Distributed Cloud

The F5® Distributed Cloud Terraform provider can be used to configure Distributed Cloud Mesh objects, and these objects represent the desired state of the system. That desired state could be an HTTP/TCP load balancer, a vK8s cluster, a service mesh, API security, and so on. The Terraform F5® Distributed Cloud provider has more than 100 resources and data resources. Some of the resources we will be using in this example are for Distributed Cloud services such as the HTTP load balancer, F5® Distributed Cloud WAAP, and F5® Distributed Cloud site creation in AWS and Azure. You can find a list of resources here. Here are the steps to deploy a simple application using the F5® Distributed Cloud Terraform provider on AWS and Azure. I am using the repository below to create the configuration. You can also refer to the README on F5's DevCentral Git.

git clone https://github.com/f5devcentral/f5-digital-customer-engagement-center.git
cd f5-digital-customer-engagement-center/
git checkout mcn                                  # checkout the multi-cloud branch
cd solutions/volterra/multi-cloud-connectivity/   # change dir for the multi-cloud scripts
# customize admin.auto.tfvars.example as per your needs
cp admin.auto.tfvars.example admin.auto.tfvars
./setup.sh       # Run setup.sh to deploy the Volterra sites which identify services in AWS, Azure, etc.
./aws-setup.sh # Run aws-setup.sh file to deploy the application and infrastructure in AWS ./azure-setup.sh # Run azure-setup.sh file to deploy the application and infrastructure in Azure This will create the following objects on AWS and Azure 3 VPC and VNET networks on each cloud respectively 3F5® Distributed Cloud Mesh nodes on each cloud seen as master-0 3 backend application on each cloud seen as scsmcn-workstation here projectPrefix is scsmcn in admin.auto.tfvars file 1 jump box on each cloud to test Create 6 http load balancers one for each node and can be accessed through F5® Distributed Cloud Console Create 6F5® Distributed Cloud sites which can be accessed via F5® Distributed Cloud Console F5® Distributed Cloud Mesh does all the stitching of the VPCs and VNETs for you, you don’t need to create any transit gateway, also it stitches VPCs & VNETs to the F5® Distributed Cloud Application Delivery Network. Client when accessing backend application will use the nearestF5® Distributed Cloud Regional network http load balancer to minimize the latency using Anycast. Run setup.sh script to deploy theF5® Distributed Cloud sites this will create a virtual sites that will identify services deployed in AWS and Azure. ./setup.sh Initializing the backend... Initializing provider plugins... - Reusing previous version of hashicorp/random from the dependency lock file - Reusing previous version of volterraedge/volterra from the dependency lock file - Using previously-installed hashicorp/random v3.1.0 - Using previously-installed volterraedge/volterra v0.10.0 Terraform has been successfully initialized! random_id.buildSuffix: Creating... random_id.buildSuffix: Creation complete after 0s [id=c9o] volterra_virtual_site.site: Creating... volterra_virtual_site.site: Creation complete after 2s [id=3bde7bd5-3e0a-4fd5-b280-7434ee234117] Apply complete! Resources: 2 added, 0 changed, 0 destroyed. Outputs: buildSuffix = "73da" volterraVirtualSite = "scsmcn-site-73da" created random build suffix and virtual site aws-setup.sh file to deploy the vpc, webservers and jump host, http load balancer,F5® Distributed Cloud aws site and origin servers ./aws-setup.sh Initializing modules... Initializing the backend... Initializing provider plugins... - Reusing previous version of volterraedge/volterra from the dependency lock file - Reusing previous version of hashicorp/aws from the dependency lock file - Reusing previous version of hashicorp/random from the dependency lock file - Reusing previous version of hashicorp/null from the dependency lock file - Reusing previous version of hashicorp/template from the dependency lock file - Using previously-installed hashicorp/null v3.1.0 - Using previously-installed hashicorp/template v2.2.0 - Using previously-installed volterraedge/volterra v0.10.0 - Using previously-installed hashicorp/aws v3.60.0 - Using previously-installed hashicorp/random v3.1.0 Terraform has been successfully initialized! An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create <= read (data resources) Terraform will perform the following actions: # data.aws_instances.volterra["bu1"] will be read during apply # (config refers to values not yet known) <= data "aws_instances" "volterra" { + id = (known after apply) + ids = (known after apply) ..... truncated output .... volterra_app_firewall.waf: Creating... module.vpc["bu2"].aws_vpc.this[0]: Creating... aws_key_pair.deployer: Creating... module.vpc["bu3"].aws_vpc.this[0]: Creating... 
module.vpc["bu1"].aws_vpc.this[0]: Creating... aws_route53_resolver_rule_association.bu["bu3"]: Creation complete after 1m18s [id=rslvr-rrassoc-d4051e3a5df442f29] Apply complete! Resources: 90 added, 0 changed, 0 destroyed. Outputs: bu1JumphostPublicIp = "54.213.205.230" vpcId = "{\"bu1\":\"vpc-051565f673ef5ec0d\",\"bu2\":\"vpc-0c4ad2be8f91990cf\",\"bu3\":\"vpc-0552e9a05bea8013e\"}" azure-setup.sh will execute terraform scripts to deploy webservers, vnet, http load balancer , origin servers andF5® Distributed Cloud azure site. ./azure-setup.sh Initializing modules... Initializing the backend... Initializing provider plugins... - Reusing previous version of volterraedge/volterra from the dependency lock file - Reusing previous version of hashicorp/random from the dependency lock file - Reusing previous version of hashicorp/azurerm from the dependency lock file - Using previously-installed volterraedge/volterra v0.10.0 - Using previously-installed hashicorp/random v3.1.0 - Using previously-installed hashicorp/azurerm v2.78.0 Terraform has been successfully initialized! ..... truncated output .... azurerm_private_dns_a_record.inside["bu11"]: Creation complete after 2s [id=/subscriptions/187fa2f3-5d57-4e6a-9b1b-f92ba7adbf42/resourceGroups/scsmcn-rg-bu11-73da/providers/Microsoft.Network/privateDnsZones/shared.acme.com/A/inside] Apply complete! Resources: 58 added, 2 changed, 12 destroyed. Outputs: azureJumphostPublicIps = [ "20.190.21.3", ] After running terraform script you can sign in into the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click on System on the left and then Site List to list the sites, you can also enter into search string to search a particular site, Below you can find list of virtual sites deployed for Azure and AWS, status of these sites can be seen using F5® Distributed Cloud Console. AWS Sites on F5® Distributed Cloud Console Now in order to see the connectivity of sites to the Regional Edges, click System --> Site Map --> Click on the appropriate site you want to focus and then Connectivity --> Click AWS, Below you can see the AWS virtual sites created on F5® Distributed Cloud Console, this provides visibility, throughput, reachability and health of the infrastructure provisioned on AWS. Provides system and application level metrics. Azure Sites on F5® Distributed Cloud Console Below you can see the Azure virtual sites created on F5® Distributed Cloud Console, this provides visibility, throughput, reachability and health of the infrastructure provisioned on Azure. Provides system and application level metrics. Analytics on F5® Distributed Cloud Console To check the status of the application, sign in into the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click on the application tab --> HTTP load balancer --> select appropriate load balancer --> click Request. Below you can see various matrices for applications deployed into the AWS and Azure cloud, you can see latency at different levels like client to lb, lb to server and server to application. Also it provides HTTP requests with Error codes on application access. API First F5® Distributed Cloud Console F5® Distributed Cloud Console helps many operational tasks like visibility into request types, JSON payload of the request indicating browser type, device type, tenant and also request came on which http load balancers and many more details. Benefits OpEx Reduction: Single simplified stack can be used to manage apps in different clouds. 
For example, the burden of configuring security policies in different locations is avoided, and the transit costs associated with public cloud can be eliminated. Reduce Operational Complexity: A network expert is not required, as F5® Distributed Cloud Console provides a simplified way to configure and manage networks and resources both at the customer edge and in the public cloud. Your NetOps or DevOps team can easily deploy the infrastructure or applications without networking expertise, and adoption of a new cloud provider is accelerated. App User Experience: Customers don't have to learn different visibility tools; F5® Distributed Cloud Console provides end-to-end visibility of applications, which results in a better user experience. The origin server or load balancer can be moved closer to the customer, which reduces latency for apps and APIs and further improves the experience.

Manage F5 BIG-IP Advanced WAF Policies with Terraform (Intro)
This article series is about managing F5 BIG-IP Advanced WAF policies using Terraform. It leverages the F5 BIG-IP Advanced WAF Declarative API and covers most production workflows. This is also a great way to handle multi-cloud deployments with a unified automation stack for F5 BIG-IP Advanced WAF policies and consistent security protection across all web and API apps, regardless of their location.

Automating certificate lifecycle management with HashiCorp Vault
One of the challenges many enterprises face today is keeping track of various certificates and ensuring that those associated with critical applications deployed across multiple clouds are current and valid. This integration helps you improve your security posture with short-lived, dynamic SSL certificates using HashiCorp Vault and AS3 on BIG-IP.

First, a bit about AS3… Application Services 3 Extension (referred to as AS3 Extension, or more often simply AS3) is a flexible, low-overhead mechanism for managing application-specific configurations on a BIG-IP system. AS3 uses a declarative model, meaning you provide a JSON declaration rather than a set of imperative commands. The declaration represents the configuration which AS3 is responsible for creating on a BIG-IP system. AS3 is well-defined according to the rules of JSON Schema, and declarations validate according to JSON Schema. AS3 accepts declaration updates via REST (push), reference (pull), or CLI (flat file editing).

What is Vault? Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log. A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented architecture communication, etc. Understanding who is accessing what secrets is already very difficult and platform specific. Adding on key rolling, secure storage, and detailed audit logs is almost impossible without a custom solution. This is where Vault steps in. Public Key Infrastructure (PKI) provides a way to verify authenticity and guarantee secure communication between applications. Setting up your own PKI infrastructure can be a complex and very manual process. Vault PKI allows users to dynamically generate X.509 certificates quickly and on demand. Vault PKI can streamline the distribution of TLS certificates and allows users to create PKI certificates with a single command. Vault PKI reduces the overhead of the usual manual process of generating a private key and CSR, submitting it to a CA, and waiting for verification and signing to complete, while additionally providing an authentication and authorization mechanism for validation.

Benefits of using Vault automation for BIG-IP: a cloud- and platform-independent solution for your application anywhere (public or private cloud); uses the Vault agent and leverages AS3 templating to update expiring certificates; no application downtime - configuration is updated dynamically without affecting traffic.

Configuration:

1. Setting up the environment - deploy instances of BIG-IP VE and Vault in the cloud or on premises

You can create the Vault and BIG-IP instances in the cloud using Terraform; the repo is https://github.com/f5devcentral/f5-certificate-rotate. This will download the Vault binary and start the Vault server, and it will also deploy the F5 BIG-IP instance on the AWS cloud. Once the instances are ready, SSH into the Vault Ubuntu server, change directory to /tmp, and execute the commands below.
# Point to the Vault Server
export VAULT_ADDR=http://127.0.0.1:8200

# Export the Vault Token
export VAULT_TOKEN=root

# Create roles and define allowed domains with TTL for the certificate
vault write pki/roles/web-certs allowed_domains=demof5.com ttl=160s max_ttl=30m allow_subdomains=true

# Enable the app role
vault auth enable approle

# Create an app policy and apply https://github.com/f5devcentral/f5-certificate-rotate/blob/master/templates/app-pol.hcl
vault policy write app-pol app-pol.hcl

# Apply the app policy using the app role
vault write auth/approle/role/web-certs policies="app-pol"

# Read the role id from Vault
vault read -format=json auth/approle/role/web-certs/role-id | jq -r '.data.role_id' > roleID

# Using the role id, use the secret id to authenticate to the Vault server
vault write -f -format=json auth/approle/role/web-certs/secret-id | jq -r '.data.secret_id' > secretID

# Finally run the Vault agent using the config file
vault agent -config=agent-config.hcl -log-level=debug

2. Use the AS3 template file certs.tmpl with the values as shown

The template file shown below will be automatically uploaded to the Vault instance (the Ubuntu server) in the /tmp directory. Here I am using an AS3 file called certs.tmpl, which is templatized as shown below.

{{ with secret "pki/issue/web-certs" "common_name=www.demof5.com" }}
[
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/remark",
    "value": "Updated on {{ timestamp }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/certificate",
    "value": "{{ .Data.certificate | toJSON | replaceAll "\"" "" }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/privateKey",
    "value": "{{ .Data.private_key | toJSON | replaceAll "\"" "" }}"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/chainCA",
    "value": "{{ .Data.issuing_ca | toJSON | replaceAll "\"" "" }}"
  }
]
{{ end }}

3. Vault renders a new JSON payload file called certs.json whenever the SSL certificates expire

When the certificate expires, Vault generates a new certificate which we can use to update the BIG-IP via a script; below is the certs.json that is created automatically.

Snippet of certs.json being created:

[
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/remark",
    "value": "Updated on 2020-10-02T19:05:53Z"
  },
  {
    "op": "replace",
    "path": "/Demof5/HTTPS/webcert/certificate",
    "value": "-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIUaMgYXdERwzwU+tnFsSFld3DYrkEwDQYJKoZIhvcNAQEL\nBQAwEzERMA8GA1UEAxMIZGVtby5jb20wHhcNMjAxMDAyMTkwNTIzWhcNMj

4. Use the Vault Agent file to run the integration continuously without affecting application traffic

Example Vault Agent file:

pid_file = "./pidfile"

vault {
  address = "http://127.0.0.1:8200"
}

auto_auth {
  method "approle" {
    mount_path = "auth/approle"
    config = {
      role_id_file_path = "roleID"
      secret_id_file_path = "secretID"
      remove_secret_id_file_after_reading = false
    }
  }
  sink "file" {
    config = {
      path = "approleToken"
    }
  }
}

template {
  source      = "./certs.tmpl"
  destination = "./certs.json"
  #command     = "bash updt.sh"
}

template {
  source      = "./https.tmpl"
  destination = "./https.json"
}
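The template stanza above ships with its update command commented out. Purely as a sketch of what such an update script could do (this is not necessarily the repository's actual updt.sh), the rendered certs.json, which is a JSON Patch array, can be applied to the running AS3 declaration; the BIG-IP address and credentials below are placeholders for your environment.

#!/usr/bin/env bash
# Sketch of an update step: apply the rendered certs.json (a JSON Patch array)
# to the existing AS3 declaration on the BIG-IP.
BIGIP_HOST=203.0.113.10          # placeholder management address
BIGIP_CREDS='admin:changeme'     # placeholder credentials

curl -sk -u "${BIGIP_CREDS}" \
     -X PATCH "https://${BIGIP_HOST}/mgmt/shared/appsvcs/declare" \
     -H "Content-Type: application/json" \
     -d @certs.json

This mirrors the "no application downtime" benefit described above: only the certificate, key, and chain values in the existing declaration are replaced.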
5. For integration with HCP Vault

If you are using the HashiCorp-hosted Vault solution instead of standalone Vault, you can still use this solution by making a few changes in the Vault agent file. Detailed documentation for using HCP Vault is in the README. You can map tenant application objects on BIG-IP to namespaces on HCP Vault, which provides isolation. More details on how to create this solution are at https://github.com/f5businessdevelopment/f5-hcp-vault

Summary

The integration has the components listed below; Venafi or Let's Encrypt can also be used as the external CA. Using this solution, you are able to: improve your security posture with short-lived dynamic certificates; automatically update applications using templating and the robust AS3 service; increase collaboration by breaking down silos; and deploy a cloud-agnostic solution on premises or in the public cloud.

Pushing Updates to BIG-IP w/ HashiCorp Consul Terraform Sync
HashiCorp Consul Terraform Sync (CTS) is a tool/daemon that allows you to push updates to your BIG-IP devices in near real-time (this is also referred to as Network Infrastructure Automation).This helps in scenarios where you want to preserve an existing set of network/security policies and deliver updates to application services faster. Consul Terraform Sync Consul is a service registry that keeps track of where a service is (10.1.20.10:80 and 10.1.20.11:80) and the health of the service (responding to HTTP requests).Terraform allows you to push updates to your infrastructure, but usually in a one and done fashion (fire and forget).NIA is a symbiotic relationship of Terraform and Consul.It allows you to track changes via Consul (new node added/removed from a service) and push the change to your infrastructure via Terraform. Putting CTS in Action We can use CTS to help solve a common problem of how to enable a network/security team to allow an application team to dynamically update the pool members for their application.This will be accomplished by defining a virtual server on the BIG-IP and then enabling the application team to update the state of the pool members (but not allow them to modify the virtual server itself). Defining the Virtual Server The first step is that we want to define what services we want.In this example we use a FAST template to generate an AS3 declaration that will generate a set of Event-Driven Service Discovery pools.The Event-Driven pools will be updated by NIA and we will apply an iControl REST RBAC policy to restrict updates. The FAST template takes the inputs of “tenant”, “virtual server IP”, and “services”. This generates a Virtual Server with 3 pools. Event-Driven Service Discovery Each of the pools is created using Event-Driven Service Discovery that creates a new API endpoint with a path of: /mgmt/shared/service-discovery/task/ ~[tenant]~EventDrivenApps~[service]_pool/nodes You can send a POST API call these to add/remove pool members (it handles creation/deletion of nodes).The format of the API call is an array of node objects: [{“id”:”[identifier]”,”ip”:”[ip address]”,”port”:[port (optional)]}] We can use iControl REST RBAC to limit access to a user to only allow updates via the Event-Driven API. Creating a CTS Task NIA can make use of existing Terraform providers including the F5 BIG-IP Provider.We create our own module that makes use of the Event-Driven API ... resource "bigip_event_service_discovery" "pools" { for_each = local.service_ids taskid = "~EventDriven~EventDrivenApps~${each.key}_pool" dynamic "node" { for_each = local.groups[each.key] content { id = node.value.node ip = node.value.node_address port = node.value.port } } } ... Once NIA is run we can see it updating the BIG-IP - Finding f5networks/bigip versions matching "~> 1.5.0"... ... module.AS3.bigip_event_service_discovery.pools["app003"]: Creating... … module.AS3.bigip_event_service_discovery.pools["app002"]: Creation complete after 0s [id=~EventDriven~EventDrivenApps~app002_pool] Apply complete! Resources: 3 added, 0 changed, 0 destroyed. Scaling up the environment to go from 3 pool members to 10 you can see NIA pick-up the changes and apply them to the BIG-IP in near real-time. module.AS3.bigip_event_service_discovery.pools["app001"]: Refreshing state... [id=~EventDriven~EventDrivenApps~app001_pool] … module.AS3.bigip_event_service_discovery.pools["app002"]: Modifying... 
[id=~EventDriven~EventDrivenApps~app002_pool]
…
module.AS3.bigip_event_service_discovery.pools["app002"]: Modifications complete after 0s [id=~EventDriven~EventDrivenApps~app002_pool]

Apply complete! Resources: 0 added, 3 changed, 0 destroyed.

NIA can be run interactively at the command line, but you can also run it as a system service (i.e., under systemd).

Alternate Method

In the previous example you saw how AS3 was used to define the Virtual Server resource. You can also opt to use the Event-Driven API directly on an existing BIG-IP pool (just be warned that it will obliterate any existing pool members once you send an update via the Event-Driven nodes API). To create a new Event-Driven pool you would send a POST call with the following payload to /mgmt/shared/service-discovery/task

{
  "id": "test_pool",
  "schemaVersion": "1.0.0",
  "provider": "event",
  "resources": [
    {
      "type": "pool",
      "path": "/Common/test_pool",
      "options": {
        "servicePort": 8080
      }
    }
  ],
  "nodePrefix": "/Common/"
}

You would then be able to access it with the id of "test_pool". To remove it from Event-Driven Service Discovery you would send a DELETE call to /mgmt/shared/service-discovery/task/test_pool

Separation of Concerns

In this example you saw how CTS could be used to separate network, security, and application tasks, but these could just as easily be combined using NIA. Consul Terraform Sync is now generally available, and I look forward to seeing how you can leverage it. For an example that is similar to this article, you can take a look at the following GitHub repo that has an example of using NIA. You can also view another example on the Terraform registry.
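As a concrete illustration of the Event-Driven nodes API described above, a membership update for the "test_pool" task looks roughly like this; the BIG-IP address, credentials, and node values are placeholders.

# Sketch: update the members of the "test_pool" Event-Driven task created above.
curl -sk -u admin:changeme \
     -X POST "https://203.0.113.10/mgmt/shared/service-discovery/task/test_pool/nodes" \
     -H "Content-Type: application/json" \
     -d '[{"id":"node1","ip":"10.1.20.10","port":80},{"id":"node2","ip":"10.1.20.11","port":80}]'

Each POST replaces the full set of nodes for the task, which is the behavior the warning above refers to.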
Consistent Security and performance across clouds using BIG-IP and Terraform

As reported by IDC, enterprises are increasingly considering deploying applications and services into multiple public clouds. This presents a new set of challenges: configuring virtual compute, storage, network, and middleware services on each of these clouds requires cloud-specific skills and configuration normalization to ensure the same customer experience. In my previous blog, I went through how to configure BIG-IP as a part of your CI/CD pipeline using Terraform and GitHub Actions. As you might know, BIG-IP Virtual Editions can be deployed in all major public clouds using Terraform. In this article, I will cover how to consistently apply configurations across BIG-IP Virtual Editions running in multiple clouds using Terraform automation. We will use templating for AS3 and will build declarations that you can apply to a BIG-IP running in any cloud using Terraform.

Environment

For this use case example, I will deploy a WAF policy on the BIG-IP devices running in AWS and Azure. The BIG-IP VE instances and backend application instances are created using the Terraform scripts found here, but the following instructions will work on any cloud-deployed BIG-IP VEs and applications. I will use an AS3 declaration which configures a new tenant on the BIG-IP with a virtual server, a pool, and nodes. I will apply the same declaration, with an associated WAF policy, to virtual BIG-IP instances in the AWS and Azure clouds using Terraform.

Prerequisites

1. An AWS account
2. An Azure account
3. A BIG-IP VE v14+ instance running on AWS and Azure
4. The ASM module licensed and activated on your BIG-IP
5. Terraform installed on your local or remote jump box

Let's get started…

AS3 Declaration

Below is an example of an AS3 declaration for creating a virtual server and attaching a WAF security policy to it. At a minimum, you will need to edit the IPs to match your environment.

{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.24.0",
    "id": "Protected_App",
    "My_Protected_App": {
      "class": "Tenant",
      "App": {
        "class": "Application",
        "template": "http",
        "serviceMain": {
          "class": "Service_HTTP",
          "virtualPort": 8080,
          "virtualAddresses": [ "10.2.1.200" ],
          "pool": "web_pool",
          "policyWAF": { "use": "My_ASM_Policy" },
          "persistenceMethods": [],
          "profileMultiplex": { "bigip": "/Common/oneconnect" }
        },
        "web_pool": {
          "class": "Pool",
          "monitors": [ "http" ],
          "members": [
            {
              "servicePort": 80,
              "serverAddresses": [ "10.2.1.101", "10.2.1.102" ]
            }
          ]
        },
        "My_ASM_Policy": {
          "class": "WAF_Policy",
          "url": "https://raw.githubusercontent.com/scshitole/more-terraform/master/Sample_app_sec_02_waf_policy.xml",
          "ignoreChanges": true
        }
      }
    }
  }
}

Using Terraform to configure the BIG-IP

I am using Terraform to deploy the AS3 declaration above, which configures the virtual server and attaches the ASM WAF policy; the WAF policy URL is also declared in the AS3 as shown above. The BIG-IP can be provisioned in Azure using the 'terraform-azure-bigip-module' located at https://github.com/f5devcentral/terraform-azure-bigip-module. Once that is done, I use the Terraform configuration below to deploy the secure application configuration using the bigip_as3 resource, which is part of the BIG-IP Terraform provider.
Terraform configuration is located at https://github.com/scshitole/mcloud/blob/main/terraform-azure-bigip-module/examples/as3/main.tf Example Terraform TF for Azure Cloud provider "azurerm" { features {} } terraform { required_providers { bigip = { source = "f5networks/bigip" version = "1.8.0" } } } provider "bigip" { address = var.address username = "admin" password = var.password port = var.port } resource "bigip_as3" "as3-waf" { as3_json = file(var.declaration) } resource "azurerm_network_interface" "appnic" { count = var.app_count name = "app_nic" location = var.location resource_group_name = var.resource_group ip_configuration { name = "testConfiguration" subnet_id = var.subnet_id private_ip_address_allocation = "Static" private_ip_address = "10.2.1.101" } } resource "azurerm_managed_disk" "appdisk" { name = "datadisk_existing_${count.index}" count = var.app_count location = var.location resource_group_name = var.resource_group storage_account_type = "Standard_LRS" create_option = "Empty" disk_size_gb = "1023" } resource "azurerm_availability_set" "avset" { name = "avset" location = var.location resource_group_name = var.resource_group platform_fault_domain_count = 2 platform_update_domain_count = 2 managed = true } resource "azurerm_virtual_machine" "app" { count = var.app_count name = "app_vm_${count.index}" location = var.location availability_set_id = azurerm_availability_set.avset.id resource_group_name = var.resource_group network_interface_ids = [element(azurerm_network_interface.appnic.*.id, count.index)] vm_size = "Standard_DS1_v2" # Uncomment this line to delete the OS disk automatically when deleting the VM delete_os_disk_on_termination = true # Uncomment this line to delete the data disks automatically when deleting the VM delete_data_disks_on_termination = true storage_image_reference { publisher = "Canonical" offer = "UbuntuServer" sku = "16.04-LTS" version = "latest" } storage_os_disk { name = "myosdisk${count.index}" caching = "ReadWrite" create_option = "FromImage" managed_disk_type = "Standard_LRS" } # Optional data disks storage_data_disk { name = "datadisk_new_${count.index}" managed_disk_type = "Standard_LRS" create_option = "Empty" lun = 0 disk_size_gb = "1023" } storage_data_disk { name = element(azurerm_managed_disk.appdisk.*.name, count.index) managed_disk_id = element(azurerm_managed_disk.appdisk.*.id, count.index) create_option = "Attach" lun = 1 disk_size_gb = element(azurerm_managed_disk.appdisk.*.disk_size_gb, count.index) } os_profile { computer_name = format("appserver-%s", count.index) admin_username = "appuser" admin_password = var.upassword } os_profile_linux_config { disable_password_authentication = false } tags = { Name = "${var.prefix}-app" } } Example Terraform TF for AWS Cloud The BIG-IP can be deployed using the terraform module for AWS located at https://github.com/f5devcentral/terraform-aws-bigip, once done I am using the below TF configuration to deploy the application and the AS3 application objects using terraform BIG-IP provider resource bigip-as3 in AWS Cloud, this will also protect the application using the ASM WAF policy defined in the AS3. 
The Terraform TF file configuration is located at https://github.com/scshitole/mcloud/blob/main/terraform-aws-bigip/examples/as3/main.tf

terraform {
  required_providers {
    bigip = {
      source  = "f5networks/bigip"
      version = "1.8.0"
    }
  }
}

provider "bigip" {
  address  = var.address
  username = "admin"
  password = var.password
  port     = var.port
}

resource "bigip_as3" "as3-waf" {
  as3_json = file(var.declaration)
}

provider "aws" {
  profile = "default"
  region  = "us-east-2"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "example" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
  private_ip    = "10.0.0.100"
  subnet_id     = var.subnet_id

  tags = {
    Name = "scs-minstance"
  }
}

Conclusion

With Terraform automation, BIG-IP and AS3 can facilitate consistent network, security, and performance configurations for applications across multiple clouds, both public and private. As shown above, you can create and use the same AS3 declaration (same load balancing and WAF policy) for applications running in AWS and Azure. The AS3 template I showed can be customized for HTTP services, TLS encryption, network security, application security, DOS protection, etc.

Integrate BIG-IP policy management into CI/CD pipeline using Terraform and GitHub Actions
AS3 (Application Services 3) is a part of F5 AutomationToolchain which provides a flexible, low overhead mechanism for managingconfigurationson a BIG-IP device.In this blog, I will show you how to integrateyour security and network policy management into your CICD pipelines using AS3, Terraform, GitHub Actions. With Terraform and GitHub Actions, you can enable continuous integration and deployment practices to automate, test and deploy BIG-IP configurations as part of a pipeline. Environment Setup In the restofthis blog,we will go overhowtocreatea sample workflow using GitHub Actions and Terraform. This workflow will be used to deploy an AS3 template which contains WAF (ASM) policy that will protect the backend application from any attacks. Prerequisite ·BIG-IP device with AS3 rpm installed and ASMenabled. ·Backend application instance ·GitHub account ·AWS account ·Local instance of Terraform installed and running (I am using Terraform version 0.14.7) ·Terraform Cloud account Setting up local CLI for Terraform Cloud Sign in into Terraform cloud at https://www.terraform.io/ Click on Create a New workspace, select CLI (Command Line Interface) Driven Workspace and assign your workspace a name – e.g., ‘bigip-ci’ Leave the browser window and go to your terminal to clone the ‘f5devcentral/bigip-ci.git’ repository. git clone https://github.com/f5devcentral/bigip-ci.git cd bigip-ci terraform login When prompted, say ‘YES’ to proceed. This will open a new browser window where you will be asked to Create API token as shown. Give a name for your token and click ‘create API token’. Copy the token you created and paste it into your terminal window. Your terminal will be waiting for input as shown. Token for app.terraform.io: Enter a value:<your token> Set up GitHub environment to access AWS and Terraform Cloud Go to https://github.com/<yourusername>/bigip-ci/settings/secrets/actions and create the secrets as shown below, also update the TF_API_TOKEN which you have created before Enter all the secrets as shown above in your GitHub repo - AWS Secrets Key and Key ID, also update ASW_SESSION_TOKEN if you are using it. Add your infrastructure details in Terraform cloud On the Terraform Cloud, under the Workspacebigip-ciupdate the terraform variables listed below. BIG-IP variables - address, port, username, password. VariabledeployWAFis the AS3 configuration file that has a WAF policy referenced. You can customize this to meet your requirements. Before proceeding, review themain.tfanddeployWAF.jsonand customize it as per your deployment. cat main.tf terraform { required_providers { bigip = { source= "F5Networks/bigip" version = "1.4.0" } } backend "remote" { organization = "SCStest" workspaces { name = "bigip-ci" } } } provider "aws" { region = "us-west-2" } provider "bigip" { address= "https://${var.address}:${var.port}" username = var.username password = var.password } # deploy application using as3 resource "bigip_as3" "DeployApp" { as3_json = file(var.deployWAF) } Testing the CI workflow Our environment is now setup such thatthe Actions workflow gets triggedwhen there is a PULL REQUEST to the MASTER branch. Let’s test it out by committing a minor change. On the terminal execute: # check you are pointing to which branch git status # checkout to the dev branch git checkout dev. # add the changed files to gitHub, this will be your main.tf and deployWAF.json git add . 
git commit -m "deployWAF" # push the files to dev branch git push Create a pull request, merge to master to trigger the Actions Workflow Merge the pull request: Click on the ‘Actions’ tab in your GitHub repo to see the trigged workflow as shown below: When you click on the workflow, you can see the summary of jobs that are executed. A green check means the run was successful. The GitHub Actions Workflow The above merge of the Pull Request executed the sample workflow at .github/workflow/bigip-ci.yml. As shown below, the workflow has various ‘event definitions’ and ‘jobs’. In this case the event is onpull_request, andthe jobs are terraform runs. The jobs listed in this workflow are:Setup Terraform, Terraform Init, Terraform Plan and Terraform Apply. name: "bigip_waf" on: push: branches: - master pull_request: jobs: terraform: name: "Terraform" runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v2 - name: Setup Terraform uses: hashicorp/setup-terraform@v1 with: # terraform_version: 0.13.0: cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }} - name: Terraform Init id: init run: terraform init - name: Terraform Plan id: plan if: github.event_name == 'pull_request' run: terraform plan -no-color continue-on-error: true - uses: actions/github-script@0.9.0 if: github.event_name == 'pull_request' env: PLAN: "terraform\n${{ steps.plan.outputs.stdout }}" with: github-token: ${{ secrets.GITHUB_TOKEN }} script: | const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\` #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\` #### Terraform Plan 📖\`${{ steps.plan.outcome }}\` <details><summary>Show Plan</summary> \`\`\`${process.env.PLAN}\`\`\` </details> *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`; github.issues.createComment({ issue_number: context.issue.number, owner: context.repo.owner, repo: context.repo.repo, body: output }) - name: Terraform Plan Status if: steps.plan.outcome == 'failure' run: exit 1 - name: Terraform Apply if: github.ref == 'refs/heads/master' && github.event_name == 'push' run: terraform apply -auto-approve Conclusion: BIG-IP policy management can be very easily included into your CI/CD pipelines. You can use any tools such as Jenkins, GitHub actions etc. I showed how to use GitHub Actions workflow and Terraform cloud for my setup. You can use this in your environment using the same workflow, and by changing the terraform file depending upon your BIG-IP configurations. Using similar concepts, you can test BIG-IP configurations in every state of the application delivery lifecycle – development, staging as well as in production. I hope this is helpful. Please share your thoughts and comments below.1.3KViews0likes0CommentsNGINX ingress proxy for Consul Service Mesh
Disclaimer: This blog post is an abridged (NGINX specific) version of the original blog by John Eikenberry at HashiCorp. Consul service mesh provides service-to-service connection authorization and encryption using mutual Transport Layer Security (mTLS). Secure mesh networks require ingress points to act as gateways to enable external traffic to communicate with the internal services. NGINX supports mTLS and can be configured as a robust ingress proxy for your mesh network. For Consul service mesh environments, you can use Consul-template to configure NGINX as a native ingress proxy. This can be beneficial when performance is of utmost importance. Below you can walk through a complete example setup. The examples use consul-template to dynamically generate the proxy configuration and the required certificates for the NGINX to communicate directly with the services inside the mesh network. This blog post is only meant to demonstrate features and may not be a production ready or secure deployment. Each of the services are configured for demonstration purposes and will run in the foreground and output its logs to that console. They are all designed to run from files in the same directory. In this blog post, we use the term ingress proxy to describe Nginx. When we use the term sidecar proxy, we mean the Consul Connect proxy for the internal service. Common Infrastructure The example setup below requires Consul, NGINX, and Python. The former is available at the link provided, NGINX and Python should be easily installable with the package manager on any Linux system. The Ingress proxy needs a service to proxy to, for that you need Consul, a service (E.g., a simple webserver), and the Connect sidecar proxy to connect it to the mesh. The ingress proxy will also need the certificates to make the mTLS connection. Start Consul Connect, Consul’s service mesh feature, requires Consul 1.2.0 or newer. First start your agent. $ consul agent -dev Start a Webserver For the webserver you will use Python’s simple built-in server included in most Linux distributions and Macs. $ python -m SimpleHTTPServer Python’s webserver will listen on port 8000 and publish an index.html by default. For demo purposes, create an index.html for it to publish. $ echo "Hello from inside the mesh." > index.html Next, you’ll need to register your “webserver” with Consul. Note, the Connect option registers a sidecar proxy with the service. $ echo '{ "service": { "name": "webserver", "connect": { "sidecarservice": {} }, "port": 8000 } }' > webserver.json $ consul services register webserver.json You also need to start the sidecar. $ consul connect proxy -sidecar-for webserver Note, the service is using the built-in sidecar proxy. In production you would probably want to consider using Envoy or NGINX instead. Create the Certificate File Templates To establish a connection with the mesh network, you’ll need to use consul-template to fetch the CA root certificate from the Consul servers as well as the applications leaf certificates, which Consul will generate. You need to create the templates that consul-template will use to generate the certificate files needed for NGINX ingress proxy. The template functions caRoots and caLeaf require consul-template version 0.23.0 or newer. Note, the name, “nginx” used in the leaf certificate templates. It needs to match the name used to register the ingress proxy services with Consul below. ca.crt NGINX requires the CA certificate. 
$ echo '{{range caRoots}}{{.RootCertPEM}}{{end}}' > ca.crt.tmpl Nginx cert.pem and cert.key NGINX requires the certificate and key to be in separate files. $ echo '{{with caLeaf "nginx"}}{{.CertPEM}}{{end}}' > cert.pem.tmpl $ echo '{{with caLeaf "nginx"}}{{.PrivateKeyPEM}}{{end}}' > cert.key.tmpl Setup The Ingress Proxies NGINX proxy service needs to be registered with Consul and have a templated configuration file. In each of the templated configuration files, the connect function is called and returns a list of the services with the passed name “webserver” (matching the registered service above). The list Consul creates is used to create the list of back-end servers to which the ingress proxy connections. The ports are set to different values so you can have both proxies running at the same time. First, register the NGINX ingress proxy service with the Consul servers. $ echo '{ "service": { "name": "nginx", "port": 8081 } }' > nginx-service.json $ consul services register nginx-service.json Next, configure the NGINX configuration file template. Set it to listen on the registered port and route to the Connect-enabled servers retrieved by the Connect call. nginx-proxy.conf.tmpl $ cat > nginx-proxy.conf.tmpl << EOF daemon off; master_process off; pid nginx.pid; error_log /dev/stdout; events {} http { access_log /dev/stdout; server { listen 8081 defaultserver; location / { {{range connect "webserver"}} proxy_pass https://{{.Address}}:{{.Port}}; {{end}} # these refer to files written by templates above proxy_ssl_certificate cert.pem; proxy_ssl_certificate_key cert.key; proxy_ssl_trusted_certificate ca.crt; } } } EOF Consul Template The final piece of the puzzle, tying things together are the consul-template configuration files! These are written in HCL, the Hashicorp Configuration Language, and lay out the commands used to run the proxy, the template files, and their destination files. nginx-ingress-config.hcl $ cat > nginx-ingress-config.hcl << EOF exec { command = "/usr/sbin/nginx -p . -c nginx-proxy.conf" } template { source = "ca.crt.tmpl" destination = "ca.crt" } template { source = "cert.pem.tmpl" destination = "cert.pem" } template { source = "cert.key.tmpl" destination = "cert.key" } template { source = "nginx-proxy.conf.tmpl" destination = "nginx-proxy.conf" } EOF Running and Testing You are now ready to run the consul-template managed NGINX ingress proxy. When you run consul-template, it will process each of the templates, fetching the certificate and server information from consul as needed, and render them to their destination files on disk. Once all the templates have been successfully rendered it will run the command starting the proxy. Run the NGINX managing consul-template instance. $ consul-template -config nginx-ingress-config.hcl Now with everything running, you are finally ready to test the proxies. $ curl http://localhost:8081 Hello from inside the mesh! Conclusion F5 technologies work very well in your Consul environments. In this blog post you walked through setting up NGINX to work as a proxy to provide ingress to services contained in a Consul service mesh. Please reach out to me and the F5-HashiCorp alliance team here if you have any questions, feature requests, or any feedback to make this solution better.5.3KViews0likes0CommentsLightboard Lessons: Zero Touch Application Deployments with Terraform, Consul, and AS3
In this lightboard lesson, I show how you can move from the manual work of traditional app deployments to the automated goodness of zero-touch app deployments! This demo solution was shown in a HashiCorp webinar featuring our own Eric Chen, and it utilizes HashiCorp's Terraform and Consul, as well as the AS3 component of the F5 Automation Toolchain.

Resources

This demo in detail
F5 Terraform automation lab (hosted by WWT, registration required)
F5 Resources for Terraform
Hands-on Intro to Infrastructure as Code Using Terraform
Ready to go! Deploying F5 Infrastructure Using Terraform
F5 and HashiCorp Essentials (article series on Terraform, Consul, and Vault)

Manage BIG-IPs in Azure using Terraform Cloud
Introduction

In this article I'll outline a suite of demonstration resources designed to help you and your IT team explore the possibilities of applying DevOps practices in your own environments. The demonstration resources described below show how tools like Git, HashiCorp Terraform, HashiCorp Sentinel, Chef Inspec, and F5's Automation Toolchain can be used to introduce some of the practices listed above to F5 BIG-IPs and the IT services they help deliver. By following along with the README in the demonstration repository and the video walk-throughs listed below, you should be able to run this demonstration on your own.

Software Delivery Key Practices

IT industry research, such as Accelerate, shows that improving a company's ability to deliver software has a significant positive benefit to its overall success. The following practices and design principles are cornerstones of that improvement.

Version control of code and configuration
Automation of Deployment
Automation of Testing and Test Data Management
"Shifting Left" on Security
Loosely Coupled Architectures
Pro-active Notification

Caveats

These repositories use simplifying demonstration shortcuts for password, key, and network security. Production-ready enterprise designs and workflows should be used in place of these shortcuts. DO NOT ASSUME THAT THE CODE AND CONFIGURATION IN THESE REPOSITORIES IS PRODUCTION-READY.
The particular source control approach shown in this demonstration is one of many. Before using this approach to support your Infrastructure as Code and Configuration Management assets and workflows, you should learn about different patterns of source code management and determine what best fits your team's needs.
A variety of tools are used in this demonstration. In most cases they are not exclusively required and can be replaced with other similar tools.
The demonstration uses a licensed version of Terraform Cloud in order to demonstrate the capabilities of HashiCorp Sentinel. If you are using the free version of Terraform Cloud you won't be able to try the policy compliance use cases, but the rest of the demonstration code should work as expected.

Setting up your demonstration automation host

Before running the demonstration code, you'll need to set up the IDE host and the Azure account. Instructions for those steps are here.

Video walk-throughs

Fork the repository and open it in Visual Studio Code (1m36s)
Once the tools are installed, you can create your own copy of the repository and open it in your IDE. In the videos, Visual Studio Code is used as the IDE. In order to follow along, you'll need to create your own repository so you can set up the Terraform Cloud configuration and make your own adjustments to the build configuration (e.g., the number of application servers deployed).

Set up a Terraform Cloud workspace (1m38s)
Before running the Terraform Cloud workflow, a Terraform Cloud workspace is required. This video steps you through manually configuring the workspace and linking it to your cloned repository.

Programmatically set up Terraform Cloud workspaces for production, test, and development (10m40s)
Setting up the workspaces programmatically has the benefits of rapid, consistent results and executable knowledge in the form of scripts and configuration files. In this video we step you through programmatically building workspaces for production, test, and development environments using this repository. We also programmatically configure simple source-controlled compliance Sentinel policies.
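The demonstration repository drives this step with its own scripts; purely as an illustrative sketch of the programmatic approach (not the repository's actual code), a Terraform Cloud workspace can also be created directly against the Terraform Cloud API. The organization name, workspace name, and API token below are placeholders.

# Illustrative only: create a Terraform Cloud workspace via the API.
# TFC_TOKEN, the organization ("my-org"), and the workspace name are placeholders.
curl -s \
     --header "Authorization: Bearer ${TFC_TOKEN}" \
     --header "Content-Type: application/vnd.api+json" \
     --request POST \
     --data '{"data":{"type":"workspaces","attributes":{"name":"bigip-demo-production"}}}' \
     https://app.terraform.io/api/v2/organizations/my-org/workspaces

Repeating the call with different workspace names (for example production, test, and development) gives you the three environments used in the walkthrough.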
Initial build of production, test, and development (7m59s)
Everything should be ready for the first build of your production, test, and development environments. In this video, we step you through manually triggering Terraform Cloud builds. In addition, we'll see the impact of Sentinel policies in use, how to override policies that have been triggered, and the audit trail that results.

Automated testing of production (4m18s)
Once your environment builds have completed, it's critical to validate that they are fit for use. In this video, we step you through a simple set of tests that validate the readiness of the F5 BIG-IPs built by the Terraform Cloud workflow. These tests are not comprehensive, but they demonstrate the benefits of an executable "definition of done." The source of an updated version of the Inspec tests used in the demonstration is here.

Manual inspection of production (2m45s)
In this video, we walk through the BIG-IPs that were built in the production environment. We inspect the virtual servers and their associated pools, noting the number of application servers that were built and joined to the pool.

Programmatically add application servers and include them in the BIG-IP virtual server (8m6s)
In this video, we explore the use case of expanding the pool in the previously built production environment, using a simple change in source control. We'll see the Terraform Cloud workflow automatically trigger a new build based on a merge commit to your cloned repository. New application servers will be built and automatically added to the pool by F5's Service Discovery iApp.

Update WAF from a source control repository (no video walk-through)
We leave it as an exercise for the reader (or possibly an updated video) to look for the WAF deployed with the virtual server. The WAF is retrieved from source control here. In addition, you can experiment with changing the version of the WAF in the AS3 template in the stanza shown below. Usable values for versions are 0.1.0, 0.1.1, 0.2.0, and 0.2.1. If you choose to do this, follow the same workflow shown in the previous video about scaling the number of application servers.

"ASM_Policy": {
  "class": "WAF_Policy",
  "url": "https://github.com/mjmenger/waf-policy/raw/0.1.1/asm_policy.xml",
  "ignoreChanges": false
}

What's next?
If you've followed along through all of the use cases in the demonstration repository, you have seen the following:

Source-controlled build of an application environment, including BIG-IPs, virtual servers, pools, and WAF policies.
Managed changes with logging of authoring and approvals.
Automated scaling of application resources and BIG-IP configuration.
Automated updates to BIG-IP WAF policies.

If you want to realize the benefits of these practices for your IT service delivery, please reach out to your F5 account team.

F5 BIG-IP as a Terminating Gateway for HashiCorp Consul
Our joint customers have asked for it, so the HashiCorp Consul team and the F5 BIG-IP teams have been working to provide this early look at the technology and configurations needed to have BIG-IP perform the Terminating Gateway functionality for HashiCorp Consul.

A Bit of an Introduction

HashiCorp Consul is a multi-cloud platform used to secure service-to-service communication. You have all heard about microservices and how, within an environment like Consul, there is a need to secure and control microservice-to-microservice communications. But what happens when you want a similar level of security and control when a microservice inside the environment needs to communicate with a service outside of the environment? Enter F5 BIG-IP - the enterprise Terminating Gateway solution for HashiCorp. HashiCorp has announced the GA of their Terminating Gateway functionality, and we here at F5 want to show our support for this milestone by sharing the progress we have made to date in answering the requests of our joint customers. One of the requirements of a Terminating Gateway is that it must respect the security policies defined within Consul. HashiCorp calls these policies Intentions.

What Should You Get Out of this Article

This article is focused on how BIG-IP, when acting as a Terminating Gateway, can understand those Intentions and securely apply them to either allow or disallow microservice-to-microservice communications.

Update

We have been hard at work on this solution and have created a method to automate the manual processes detailed below. You can skip executing the steps below and jump directly to the new DevCentral Git repository for this solution. Feel free to read on to get an understanding of the workflows we have automated using the code in the new DevCentral repo. You can also check out this webinar to hear more about the solution and to see the developer, Shaun Empie, demo the automation of the solution.

First Steps

Before we dive into the iRulesLX that makes this possible, you must configure the BIG-IP virtual server to secure the connectivity with mTLS, and configure the pool, profiles, and other configuration options necessary for your environment. Many here on DevCentral have shown how F5 can perform mTLS with various solutions. Eric Chen has shown how to configure the BIG-IP to use mTLS with Slack. What I want to focus on is how to use an iRulesLX to extract the information necessary to respect the HashiCorp Intentions and allow or disallow a connection based on the Consul Intention. I have to give credit where credit is due: Sanjay Shitole is the one behind the scenes here at F5, along with Dan Callao and Blake Covarrubias from HashiCorp, who have worked through the various API touch points, designed the workflow, and built the F5-specific iRules and iRulesLX needed to make this function.

Now for the Fun Part

Once you get your virtual server and pool created the way you would like them, with the mTLS certificates etc., you can focus on creating the iLX workspace where you will write the node.js code and iRules. You can follow the instructions here to create the iLX workspace, add an extension, and add an LX plugin. Below is the tcl-based iRule that you will have to add to this workspace. To do this, go to Local Traffic > iRules > LX Workspaces and find the workspace you created in the steps above. In our example, we used "ConsulWorkSpace". Paste the text of the rule listed below into the text editor and click save file.
There is one variable (sb_debug) you can change in this file depending on the level of logging you want done to the /var/log/ltm logs. The rest of the iRule Grabs the full SNI value from the handshake. This will be parsed later on in the node.js code to populate one of the variables needed for checking the intention of this connection in Consul. The next section grabs the certificate and stores it as a variable so we can later extract the serial_id and the spiffe, which are the other two variables needed to check the Consul Intention. The next step in the iRule is to pass these three variables via an RPC_HANDLE function to the Node.js code we will discuss below. The last section uses that same RPC_HANDLE to get responses back from the node code and either allows or disallows the connection based on the value of the Consul Intention. when RULE_INIT { #set static::sb_debug to 2 if you want to enable logging to troubleshoot this iRule, 1 for informational messages, otherwise set to 0 set static::sb_debug 0 if {$static::sb_debug > 1} { log local0. "rule init" } } when CLIENTSSL_HANDSHAKE { if { [SSL::extensions exists -type 0] } { binary scan [SSL::extensions -type 0] {@9A*} sni_name if {$static::sb_debug > 1} { log local0. "sni name: ${sni_name}"} } # use the ternary operator to return the servername conditionally if {$static::sb_debug > 1} { log local0. "sni name: [expr {[info exists sni_name] ? ${sni_name} : {not found} }]"} } when CLIENTSSL_CLIENTCERT { if {$static::sb_debug > 1} {log local0. "In CLIENTSSL_CLIENTCERT"} set client_cert [SSL::cert 0] } when HTTP_REQUEST { set serial_id "" set spiffe "" set log_prefix "[IP::remote_addr]:[TCP::remote_port clientside] [IP::local_addr]:[TCP::local_port clientside]" if { [SSL::cert count] > 0 } { HTTP::header insert "X-ENV-SSL_CLIENT_CERTIFICATE" [X509::whole [SSL::cert 0]] set spiffe [findstr [X509::extensions [SSL::cert 0]] "Subject Alternative Name" 39 ","] if {$static::sb_debug > 1} { log local0. "<$log_prefix>: SAN: $spiffe"} set serial_id [X509::serial_number $client_cert] if {$static::sb_debug > 1} { log local0. "<$log_prefix>: Serial_ID: $serial_id"} } if {$static::sb_debug > 1} { log local0.info "here is spiffe:$spiffe" } set RPC_HANDLE [ILX::init "SidebandPlugin" "SidebandExt"] if {[catch {ILX::call $RPC_HANDLE "func" $sni_name $spiffe $serial_id} result]} { if {$static::sb_debug > 1} { log local0.error"Client - [IP::client_addr], ILX failure: $result"} HTTP::respond 500 content "Internal server error: Backend server did not respond." return } ## return proxy result if { $result eq 1 }{ if {$static::sb_debug > 1} {log local0. "Is the connection authorized: $result"} } else { if {$static::sb_debug > 1} {log local0. "Connection is not authorized: $result"} HTTP::respond 400 content '{"status":"Not_Authorized"}'"Content-Type" "application/json" } } Next is to copy the text of the node.js code below and paste it into the index.js file using the GUI. Here though there are two lines you will have to edit that are unique to your environment. Those two lines are the hostname and the port in the "const options =" section. These values will be the IP and port on which your Consul Server is listening for API calls. This node.js takes the three values the tcl-based iRule passed to it, does some regex magic on the sni_name value to get the target variable that is used to check the Consul Intention. It does this by crafting an API call to the consul server API endpoint that includes the Target, the ClientCertURI, and the ClientCertSerial values. 
The Consul server responds, the node.js code captures that response, and it passes a value back to the tcl-based iRule indicating whether the communication is allowed or disallowed.

const http = require("http");
const f5 = require("f5-nodejs");

// Initialize ILX Server
var ilx = new f5.ILXServer();

ilx.addMethod('func', function(req, res) {
  var retstr = "";
  var sni_name = req.params()[0];
  var spiffe = req.params()[1];
  var serial_id = req.params()[2];
  const regex = /[^.]*/;
  let targetarr = sni_name.match(regex);
  target = targetarr.toString();
  console.log('My Spiffe ID is: ', spiffe);
  console.log('My Serial ID is: ', serial_id);

  // Construct request payload
  var data = JSON.stringify({
    "Target": target,
    "ClientCertURI": spiffe,
    "ClientCertSerial": serial_id
  });

  // Strip off newline character(s)
  data = data.replace(/\\n/g, '');

  // Construct connection settings
  const options = {
    hostname: '10.0.0.100',
    port: 8500,
    path: '/v1/agent/connect/authorize',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': data.length
    }
  };

  // Construct Consul sideband HTTP call
  const myreq = http.request(options, res2 => {
    console.log(`Posting Json to Consul -------> statusCode: ${res2.statusCode}`);
    res2.on('data', d => {
      // Capture response payload
      process.stdout.write(d);
      retstr += d;
    });
    res2.on('end', d => {
      // Check response for valid authorization and return back to TCL iRule
      var isVal = retstr.includes(":true");
      res.reply(isVal);
    });
  });

  myreq.on('error', error => {
    console.error(error);
  });

  // Initiate Consul call
  myreq.write(data);
  myreq.end();
});

// Start ILX listener
ilx.listen();

This iRulesLX solution allows multiple sources to connect to the BIG-IP virtual server and exchange mTLS information, but it only allows the connection once the Consul Intentions are verified. If your Intentions looked similar to the ones below, the client microservice would be allowed to communicate with the services behind the BIG-IP, whereas the socialapp microservice would be blocked at the BIG-IP, since we are capturing and respecting the Consul Intentions. So now that we have shown how BIG-IP acts as a Terminating Gateway for HashiCorp Consul, all the while respecting the Consul Intentions, what's next? Next is for F5 and HashiCorp to continue working together on this solution. We intend to take this a level further by creating a prototype that automates the process. The automated prototype will have a mechanism to listen for changes within the Consul API server, and when a new service is defined behind the BIG-IP acting as the Terminating Gateway, the virtual server, the pools, the SSL profiles, and the iRulesLX workspace can be automatically configured via an AS3 declaration.

What Can You Do Next

You can find all of the iRules used in this solution in our DevCentral Github repo. Please reach out to me and the F5 HashiCorp Business Development team here if you have any questions, feature requests, or any feedback to make this solution better.