Consul
7 Topics

Pushing Updates to BIG-IP w/ HashiCorp Consul Terraform Sync
HashiCorp Consul Terraform Sync (CTS) is a tool/daemon that allows you to push updates to your BIG-IP devices in near real-time (this is also referred to as Network Infrastructure Automation, or NIA). This helps in scenarios where you want to preserve an existing set of network/security policies and deliver updates to application services faster.

Consul Terraform Sync

Consul is a service registry that keeps track of where a service is (10.1.20.10:80 and 10.1.20.11:80) and the health of the service (responding to HTTP requests). Terraform allows you to push updates to your infrastructure, but usually in a one-and-done fashion (fire and forget). NIA is a symbiotic relationship of Terraform and Consul. It allows you to track changes via Consul (a new node added to or removed from a service) and push the change to your infrastructure via Terraform.

Putting CTS in Action

We can use CTS to help solve a common problem: how to enable a network/security team to allow an application team to dynamically update the pool members for their application. This will be accomplished by defining a virtual server on the BIG-IP and then enabling the application team to update the state of the pool members (but not allowing them to modify the virtual server itself).

Defining the Virtual Server

The first step is to define what services we want. In this example we use a FAST template to generate an AS3 declaration that will generate a set of Event-Driven Service Discovery pools. The Event-Driven pools will be updated by NIA, and we will apply an iControl REST RBAC policy to restrict updates.

The FAST template takes the inputs of "tenant", "virtual server IP", and "services". This generates a virtual server with 3 pools.

Event-Driven Service Discovery

Each of the pools is created using Event-Driven Service Discovery, which creates a new API endpoint with a path of:

/mgmt/shared/service-discovery/task/~[tenant]~EventDrivenApps~[service]_pool/nodes

You can send a POST API call to this endpoint to add/remove pool members (it handles creation/deletion of nodes). The format of the API call is an array of node objects:

[{"id":"[identifier]","ip":"[ip address]","port":[port (optional)]}]

We can use iControl REST RBAC to limit access so that a user is only allowed to make updates via the Event-Driven API.
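To make the endpoint format above concrete, here is a minimal sketch of what such an update could look like, assuming a tenant named EventDriven, a service named app001, and illustrative node IDs/addresses; the management address and credentials are placeholders for your environment:

$ curl -sk -u admin:admin -H "Content-Type: application/json" \
    -X POST https://<big-ip-mgmt>/mgmt/shared/service-discovery/task/~EventDriven~EventDrivenApps~app001_pool/nodes \
    -d '[{"id":"app001-web1","ip":"10.1.20.10","port":80},
         {"id":"app001-web2","ip":"10.1.20.11","port":80}]'

Note that the endpoint treats the array as the desired node set, so whatever you send replaces the existing members; CTS performs the equivalent calls for you as services change in Consul.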
Creating a CTS Task

NIA can make use of existing Terraform providers, including the F5 BIG-IP provider. We create our own module that makes use of the Event-Driven API:

...
resource "bigip_event_service_discovery" "pools" {
  for_each = local.service_ids
  taskid   = "~EventDriven~EventDrivenApps~${each.key}_pool"
  dynamic "node" {
    for_each = local.groups[each.key]
    content {
      id   = node.value.node
      ip   = node.value.node_address
      port = node.value.port
    }
  }
}
...

Once NIA is run we can see it updating the BIG-IP:

- Finding f5networks/bigip versions matching "~> 1.5.0"...
...
module.AS3.bigip_event_service_discovery.pools["app003"]: Creating...
…
module.AS3.bigip_event_service_discovery.pools["app002"]: Creation complete after 0s [id=~EventDriven~EventDrivenApps~app002_pool]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Scaling up the environment from 3 pool members to 10, you can see NIA pick up the changes and apply them to the BIG-IP in near real-time:

module.AS3.bigip_event_service_discovery.pools["app001"]: Refreshing state... [id=~EventDriven~EventDrivenApps~app001_pool]
…
module.AS3.bigip_event_service_discovery.pools["app002"]: Modifying... [id=~EventDriven~EventDrivenApps~app002_pool]
…
module.AS3.bigip_event_service_discovery.pools["app002"]: Modifications complete after 0s [id=~EventDriven~EventDrivenApps~app002_pool]

Apply complete! Resources: 0 added, 3 changed, 0 destroyed.

NIA can be run interactively at the command line, but you can also run it as a system service (i.e. under systemd).

Alternate Method

In the previous example you saw AS3 being used to define the virtual server resource. You can also opt to use the Event-Driven API directly on an existing BIG-IP pool (just be warned that it will obliterate any existing pool members once you send an update via the Event-Driven nodes API). To create a new Event-Driven pool you would send a POST call with the following payload to /mgmt/shared/service-discovery/task:

{
  "id": "test_pool",
  "schemaVersion": "1.0.0",
  "provider": "event",
  "resources": [
    {
      "type": "pool",
      "path": "/Common/test_pool",
      "options": {
        "servicePort": 8080
      }
    }
  ],
  "nodePrefix": "/Common/"
}

You would then be able to access it with the id of "test_pool". To remove it from Event-Driven Service Discovery you would send a DELETE call to /mgmt/shared/service-discovery/task/test_pool.
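As a sketch of working with that standalone task (management address, credentials, and node values below are placeholders rather than values from the article), you could push members to it and later remove the task entirely:

$ curl -sk -u admin:admin -H "Content-Type: application/json" \
    -X POST https://<big-ip-mgmt>/mgmt/shared/service-discovery/task/test_pool/nodes \
    -d '[{"id":"web1","ip":"10.1.20.10","port":8080}]'

$ curl -sk -u admin:admin \
    -X DELETE https://<big-ip-mgmt>/mgmt/shared/service-discovery/task/test_pool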
Separation of Concerns

In this example you saw how CTS could be used to separate network, security, and application tasks, but these could just as easily be combined using NIA. Consul Terraform Sync is now generally available, and I look forward to seeing how you can leverage it. For an example that is similar to this article you can take a look at the following GitHub repo that has an example of using NIA. You can also view another example on the Terraform registry as well.

NGINX ingress proxy for Consul Service Mesh

Disclaimer: This blog post is an abridged (NGINX-specific) version of the original blog by John Eikenberry at HashiCorp.

Consul service mesh provides service-to-service connection authorization and encryption using mutual Transport Layer Security (mTLS). Secure mesh networks require ingress points to act as gateways that enable external traffic to communicate with the internal services. NGINX supports mTLS and can be configured as a robust ingress proxy for your mesh network. For Consul service mesh environments, you can use consul-template to configure NGINX as a native ingress proxy. This can be beneficial when performance is of utmost importance.

Below you can walk through a complete example setup. The examples use consul-template to dynamically generate the proxy configuration and the required certificates for NGINX to communicate directly with the services inside the mesh network. This blog post is only meant to demonstrate features and may not be a production-ready or secure deployment. Each of the services is configured for demonstration purposes and will run in the foreground and output its logs to that console. They are all designed to run from files in the same directory.

In this blog post, we use the term ingress proxy to describe NGINX. When we use the term sidecar proxy, we mean the Consul Connect proxy for the internal service.

Common Infrastructure

The example setup below requires Consul, NGINX, and Python. Consul is available at the link provided; NGINX and Python should be easily installable with the package manager on any Linux system.

The ingress proxy needs a service to proxy to. For that you need Consul, a service (e.g., a simple webserver), and the Connect sidecar proxy to connect it to the mesh. The ingress proxy will also need the certificates to make the mTLS connection.

Start Consul

Connect, Consul's service mesh feature, requires Consul 1.2.0 or newer. First start your agent.

$ consul agent -dev

Start a Webserver

For the webserver you will use Python's simple built-in server, included in most Linux distributions and Macs.

$ python -m SimpleHTTPServer

Python's webserver will listen on port 8000 and publish an index.html by default. For demo purposes, create an index.html for it to publish.

$ echo "Hello from inside the mesh." > index.html

Next, you'll need to register your "webserver" with Consul. Note that the connect option registers a sidecar proxy with the service.

$ echo '{ "service": { "name": "webserver", "connect": { "sidecar_service": {} }, "port": 8000 } }' > webserver.json
$ consul services register webserver.json

You also need to start the sidecar.

$ consul connect proxy -sidecar-for webserver

Note, the service is using the built-in sidecar proxy. In production you would probably want to consider using Envoy or NGINX instead.
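As a quick sanity check (not part of the original walkthrough), you can confirm that the webserver and its sidecar registration took before moving on; the service name below is the one registered above, and jq is only used for readability:

$ consul catalog services
$ curl -s "http://localhost:8500/v1/health/service/webserver?passing" | jq '.[].Service'

You should see the webserver service listed, along with a sidecar proxy entry for it, once the registration and the sidecar process are both up.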
Create the Certificate File Templates

To establish a connection with the mesh network, you'll need to use consul-template to fetch the CA root certificate from the Consul servers as well as the application's leaf certificates, which Consul will generate. You need to create the templates that consul-template will use to generate the certificate files needed for the NGINX ingress proxy. The template functions caRoots and caLeaf require consul-template version 0.23.0 or newer.

Note the name "nginx" used in the leaf certificate templates. It needs to match the name used to register the ingress proxy service with Consul below.

ca.crt

NGINX requires the CA certificate.

$ echo '{{range caRoots}}{{.RootCertPEM}}{{end}}' > ca.crt.tmpl

cert.pem and cert.key

NGINX requires the certificate and key to be in separate files.

$ echo '{{with caLeaf "nginx"}}{{.CertPEM}}{{end}}' > cert.pem.tmpl
$ echo '{{with caLeaf "nginx"}}{{.PrivateKeyPEM}}{{end}}' > cert.key.tmpl

Setup the Ingress Proxy

The NGINX proxy service needs to be registered with Consul and have a templated configuration file. In the templated configuration file, the connect function is called and returns a list of the services with the passed name "webserver" (matching the service registered above). The list Consul creates is used to build the list of back-end servers to which the ingress proxy connects.

First, register the NGINX ingress proxy service with the Consul servers.

$ echo '{ "service": { "name": "nginx", "port": 8081 } }' > nginx-service.json
$ consul services register nginx-service.json

Next, configure the NGINX configuration file template. Set it to listen on the registered port and route to the Connect-enabled servers retrieved by the connect call.

nginx-proxy.conf.tmpl

$ cat > nginx-proxy.conf.tmpl << EOF
daemon off;
master_process off;
pid nginx.pid;
error_log /dev/stdout;
events {}
http {
  access_log /dev/stdout;
  server {
    listen 8081 default_server;
    location / {
      {{range connect "webserver"}}
      proxy_pass https://{{.Address}}:{{.Port}};
      {{end}}
      # these refer to files written by templates above
      proxy_ssl_certificate cert.pem;
      proxy_ssl_certificate_key cert.key;
      proxy_ssl_trusted_certificate ca.crt;
    }
  }
}
EOF

Consul Template

The final piece of the puzzle, tying things together, are the consul-template configuration files. These are written in HCL, the HashiCorp Configuration Language, and lay out the command used to run the proxy, the template files, and their destination files.

nginx-ingress-config.hcl

$ cat > nginx-ingress-config.hcl << EOF
exec {
  command = "/usr/sbin/nginx -p . -c nginx-proxy.conf"
}
template {
  source      = "ca.crt.tmpl"
  destination = "ca.crt"
}
template {
  source      = "cert.pem.tmpl"
  destination = "cert.pem"
}
template {
  source      = "cert.key.tmpl"
  destination = "cert.key"
}
template {
  source      = "nginx-proxy.conf.tmpl"
  destination = "nginx-proxy.conf"
}
EOF

Running and Testing

You are now ready to run the consul-template-managed NGINX ingress proxy. When you run consul-template, it will process each of the templates, fetching the certificate and server information from Consul as needed, and render them to their destination files on disk. Once all the templates have been successfully rendered, it will run the command that starts the proxy.

Run the NGINX-managing consul-template instance.

$ consul-template -config nginx-ingress-config.hcl

Now with everything running, you are finally ready to test the proxy.

$ curl http://localhost:8081
Hello from inside the mesh.
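If you want to inspect the rendering step by itself, one option (an addition of mine, not part of the original walkthrough) is to render a single template to stdout in one-shot mode before letting consul-template launch NGINX:

$ consul-template -template "nginx-proxy.conf.tmpl:nginx-proxy.conf" -dry -once

This prints the generated NGINX configuration so you can confirm the proxy_pass entries resolve to the sidecar address and port before starting the full exec-managed process.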
Conclusion

F5 technologies work very well in your Consul environments. In this blog post you walked through setting up NGINX to work as a proxy to provide ingress to services contained in a Consul service mesh. Please reach out to me and the F5-HashiCorp alliance team here if you have any questions, feature requests, or any feedback to make this solution better.

Lightboard Lessons: Zero Touch Application Deployments with Terraform, Consul, and AS3

In this Lightboard Lesson, I show how you can move from the manual work of traditional app deployments to the automated goodness of zero touch app deployments! This demo solution was shown in a HashiCorp webinar featuring our own Eric Chen, and utilizes HashiCorp's Terraform and Consul applications, as well as the AS3 component of the F5 Automation Toolchain.

Resources

This demo in detail
F5 Terraform automation lab (hosted by WWT, registration required)
F5 Resources for Terraform
Hands-on Intro to Infrastructure as Code Using Terraform
Ready to go! Deploying F5 Infrastructure Using Terraform
F5 and HashiCorp Essentials (article series on Terraform, Consul, and Vault)

F5 BIG-IP as a Terminating Gateway for HashiCorp Consul
Our joint customers have asked for it, so the HashiCorp Consul team and the F5 BIG-IP team have been working to provide this early look at the technology and configurations needed to have BIG-IP perform the Terminating Gateway functionality for HashiCorp Consul.

A Bit of an Introduction

HashiCorp Consul is a multi-cloud platform that is used to secure service-to-service communication. You have all heard about microservices and how, within an environment like Consul, there is a need to secure and control microservice-to-microservice communications. But what happens when you want a similar level of security and control when a microservice inside the environment needs to communicate with a service outside of the environment? Enter F5 BIG-IP - the enterprise Terminating Gateway solution for HashiCorp.

HashiCorp has announced the GA of their Terminating Gateway functionality, and we here at F5 want to show our support for this milestone by showing the progress we have made to date in answering the requests of our joint customers. One of the requirements of a Terminating Gateway is that it must respect the security policies defined within Consul. HashiCorp calls these policies Intentions.

What Should You Get Out of this Article

This article is focused on how BIG-IP, when acting as a Terminating Gateway, can understand those Intentions and can securely apply those policies/Intentions to either allow or disallow microservice-to-microservice communications.
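For context, Intentions are defined in Consul itself; the original article shows them in the Consul UI, but a minimal CLI sketch of the same idea looks like the following. The source names client and socialapp come from the example later in this article, while the destination service name "web" is a hypothetical placeholder:

$ consul intention create -allow client web
$ consul intention create -deny socialapp web
$ consul intention check client web

The check subcommand reports whether a connection from the source to the destination would be authorized, which is the same decision the BIG-IP will enforce below.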
Update

We have been hard at work on this solution, and have created a method to automate the manual processes that I had detailed below. You can skip executing the steps below and jump directly to the new DevCentral Git repository for this solution. Feel free to read the below to get an understanding of the workflows we have automated using the code in the new DevCentral repo. You can also check out this webinar to hear more about the solution and to see the developer, Shaun Empie, demo the automation of the solution.

First Steps

Before we dive into the iRulesLX that makes this possible, one must configure the BIG-IP virtual server to secure the connectivity with mTLS, and configure the pool, profiles, and other configuration options necessary for one's environment. Many here on DevCentral have shown how F5 can perform mTLS with various solutions. Eric Chen has shown how to configure the BIG-IP to use mTLS with Slack. What I want to focus on is how to use an iRuleLX to extract the info necessary to respect the HashiCorp Intentions and allow or disallow a connection based on the HashiCorp Consul Intention.

I have to give credit where credit is due. Sanjay Shitole is the one behind the scenes here at F5, along with Dan Callao and Blake Covarrubias from HashiCorp, who have worked through the various API touch points, designed the workflow, and written the F5-specific iRules and iRulesLX needed to make this function.

Now for the Fun Part

Once you get your virtual server and pool created the way you would like them, with the mTLS certificates etc., you can focus on creating the iLX workspace where you will write the Node.js code and iRules. You can follow the instructions here to create the iLX workspace, add an extension, and an LX plugin. Below is the TCL-based iRule that you will have to add to this workspace. To do this, go to Local Traffic > iRules > LX Workspaces and find the workspace you had created in the steps above. In our example, we used "ConsulWorkSpace". Paste the text of the rule listed below into the text editor and click Save File.

There is one variable (sb_debug) you can change in this file depending on the level of logging you want sent to the /var/log/ltm logs. The rest of the iRule grabs the full SNI value from the handshake. This will be parsed later on in the Node.js code to populate one of the variables needed for checking the Intention of this connection in Consul. The next section grabs the certificate and stores it as a variable so we can later extract the serial_id and the spiffe, which are the other two variables needed to check the Consul Intention. The next step in the iRule is to pass these three variables via an RPC_HANDLE function to the Node.js code we will discuss below. The last section uses that same RPC_HANDLE to get responses back from the Node code and either allows or disallows the connection based on the value of the Consul Intention.

when RULE_INIT {
    # set static::sb_debug to 2 if you want to enable logging to troubleshoot this iRule, 1 for informational messages, otherwise set to 0
    set static::sb_debug 0
    if {$static::sb_debug > 1} { log local0. "rule init" }
}
when CLIENTSSL_HANDSHAKE {
    if { [SSL::extensions exists -type 0] } {
        binary scan [SSL::extensions -type 0] {@9A*} sni_name
        if {$static::sb_debug > 1} { log local0. "sni name: ${sni_name}" }
    }
    # use the ternary operator to return the servername conditionally
    if {$static::sb_debug > 1} { log local0. "sni name: [expr {[info exists sni_name] ? ${sni_name} : {not found} }]" }
}
when CLIENTSSL_CLIENTCERT {
    if {$static::sb_debug > 1} { log local0. "In CLIENTSSL_CLIENTCERT" }
    set client_cert [SSL::cert 0]
}
when HTTP_REQUEST {
    set serial_id ""
    set spiffe ""
    set log_prefix "[IP::remote_addr]:[TCP::remote_port clientside] [IP::local_addr]:[TCP::local_port clientside]"
    if { [SSL::cert count] > 0 } {
        HTTP::header insert "X-ENV-SSL_CLIENT_CERTIFICATE" [X509::whole [SSL::cert 0]]
        set spiffe [findstr [X509::extensions [SSL::cert 0]] "Subject Alternative Name" 39 ","]
        if {$static::sb_debug > 1} { log local0. "<$log_prefix>: SAN: $spiffe" }
        set serial_id [X509::serial_number $client_cert]
        if {$static::sb_debug > 1} { log local0. "<$log_prefix>: Serial_ID: $serial_id" }
    }
    if {$static::sb_debug > 1} { log local0.info "here is spiffe:$spiffe" }
    set RPC_HANDLE [ILX::init "SidebandPlugin" "SidebandExt"]
    if {[catch {ILX::call $RPC_HANDLE "func" $sni_name $spiffe $serial_id} result]} {
        if {$static::sb_debug > 1} { log local0.error "Client - [IP::client_addr], ILX failure: $result" }
        HTTP::respond 500 content "Internal server error: Backend server did not respond."
        return
    }
    ## return proxy result
    if { $result eq 1 } {
        if {$static::sb_debug > 1} { log local0. "Is the connection authorized: $result" }
    } else {
        if {$static::sb_debug > 1} { log local0. "Connection is not authorized: $result" }
        HTTP::respond 400 content '{"status":"Not_Authorized"}' "Content-Type" "application/json"
    }
}

Next, copy the text of the Node.js code below and paste it into the index.js file using the GUI. There are two lines you will have to edit that are unique to your environment: the hostname and the port in the "const options =" section. These values are the IP and port on which your Consul server is listening for API calls. The Node.js code takes the three values the TCL-based iRule passed to it and does some regex magic on the sni_name value to produce the target variable that is used to check the Intention of this connection in Consul. It does this by crafting an API call to the Consul server API endpoint that includes the Target, the ClientCertURI, and the ClientCertSerial values.
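For reference, here is a sketch of exercising that same authorize endpoint directly with curl. The service name, SPIFFE trust domain, and certificate serial below are purely illustrative; what matters is that the response contains an "Authorized" boolean, which is what the string check in the Node.js code below keys on:

$ curl -s -X POST http://10.0.0.100:8500/v1/agent/connect/authorize \
    -H "Content-Type: application/json" \
    -d '{
      "Target": "web",
      "ClientCertURI": "spiffe://<trust-domain>.consul/ns/default/dc/dc1/svc/client",
      "ClientCertSerial": "04:00:00:00:00:01:15:4b:5a:c3:94"
    }'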
The Consul server responds, and the Node.js code captures that response and passes a value back to the TCL-based iRule, which then allows or disallows the communication.

const http = require("http");
const f5 = require("f5-nodejs");

// Initialize ILX Server
var ilx = new f5.ILXServer();

ilx.addMethod('func', function(req, res) {
    var retstr = "";
    var sni_name = req.params()[0];
    var spiffe = req.params()[1];
    var serial_id = req.params()[2];
    const regex = /[^.]*/;
    let targetarr = sni_name.match(regex);
    target = targetarr.toString();
    console.log('My Spiffe ID is: ', spiffe);
    console.log('My Serial ID is: ', serial_id);
    // Construct request payload
    var data = JSON.stringify({
        "Target": target,
        "ClientCertURI": spiffe,
        "ClientCertSerial": serial_id
    });
    // Strip off newline character(s)
    data = data.replace(/\\n/g, '');
    // Construct connection settings
    const options = {
        hostname: '10.0.0.100',
        port: 8500,
        path: '/v1/agent/connect/authorize',
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': data.length
        }
    };
    // Construct Consul sideband HTTP call
    const myreq = http.request(options, res2 => {
        console.log(`Posting Json to Consul -------> statusCode: ${res2.statusCode}`);
        res2.on('data', d => {
            // capture response payload
            process.stdout.write(d);
            retstr += d;
        });
        res2.on('end', d => {
            // Check response for valid authorization and return back to TCL iRule
            var isVal = retstr.includes(":true");
            res.reply(isVal);
        });
    });
    myreq.on('error', error => {
        console.error(error);
    });
    // Initiate Consul call
    myreq.write(data);
    myreq.end();
});

// Start ILX listener
ilx.listen();

This iRulesLX solution allows multiple sources to connect to the BIG-IP virtual server and exchange mTLS info, but only allows the connection once the Consul Intentions are verified. If your Intentions allow the Client service and deny socialapp, the Client microservice would be allowed to communicate with the services behind the BIG-IP, whereas the socialapp microservice would be blocked at the BIG-IP, since we are capturing and respecting the Consul Intentions.

So now that we have shown how BIG-IP acts as a Terminating Gateway for HashiCorp Consul, all the while respecting the Consul Intentions, what's next? Next is for F5 and HashiCorp to continue working together on this solution. We intend to take this a level further by creating a prototype that automates the process. The automated prototype will have a mechanism to listen to changes within the Consul API server, and when a new service is defined behind the BIG-IP acting as the Terminating Gateway, the virtual server, the pools, the SSL profiles, and the iRulesLX workspace can be automatically configured via an AS3 declaration.

What Can You Do Next

You can find all of the iRules used in this solution in our DevCentral GitHub repo. Please reach out to me and the F5-HashiCorp Business Development team here if you have any questions, feature requests, or any feedback to make this solution better.

Consul Templating BIG-IP Services
Introduction

HashiCorp tools can provide service discovery, monitoring, configuration templates, and secrets for F5 BIG-IP services. We'll look at an example of using these tools to deploy BIG-IP services that provide high availability, global server load balancing, and security. In this example we will:

Register Services: Add Pool Members #1 and #2 to Consul
Define Services: Store an AS3 template in Consul (defining BIG-IP LTM, DNS, and ASM services)
Extract Secrets: Use Consul Template to extract the certificate passphrase from Vault
Generate Config: Combine the pool members, AS3 template, and secret
Deploy Services: Consul Template pushes the AS3 declarations (configurations) to the BIG-IP devices

HashiCorp Tools

The HashiCorp tools that we will leverage are:

Consul: Service discovery, monitoring, key/value store
Vault: Secrets
Consul Template: Templating

Consul

Consul will provide an inventory of services (pool members) that the BIG-IP will be using. We will also store configuration information like IP addresses and SSL certificates/keys, and use Consul to monitor the health of the services.

Vault

Vault is a "secret" store. It is responsible for keeping sensitive data, like passwords, in encrypted formats. In this example we will store the passphrase used to encrypt the SSL certificate key in Vault for safe keeping.

Consul Template

Consul Template is a tool that can read data from both Consul and Vault to generate configuration files. This is useful for generating the F5 Application Services 3 (AS3) declarations that define the BIG-IP LTM, DNS, and ASM services that we would like to deploy.

Building Out the Demo Environment

1. Register Services

First we need to populate the services that we will use. Here we have deployed two Consul agents, and each agent is responsible for registering and monitoring an NGINX service.

2. Define Services

We also want to use the Consul key/value store to hold additional information: IP addresses, SSL certificates, and a custom AS3 declaration. The AS3 declaration defines the services that we want (LTM and ASM), but omits details like pool members and certificate passphrases that we will populate later.

3. Extract Secrets

In the previous step we did not store the passphrase for the SSL certificate key in Consul. Without the passphrase, the certificate key remains securely encrypted. We use Vault to store the passphrase and Consul Template to extract the secret value. This results in the secret value being encoded (base64) in the JSON declaration that will be sent to the BIG-IP.

4. Generate Config

To populate the pool members, we use a Consul template that iterates over the services registered in Consul and renders them into the AS3 declaration. Consul Template pulls the pool member IPs from the Consul service registry, and we also update the state of each pool member based on the health reported by Consul.
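The original article shows the template as an image; as a rough sketch of the idea (the service name app001 and the port are assumptions for illustration), the pool members portion of such a template could iterate over the Consul catalog like this:

$ cat > as3_members.tmpl << 'EOF'
"members": [{
  "servicePort": 80,
  "serverAddresses": [
    {{ range $i, $s := service "app001" }}{{ if ne $i 0 }},{{ end }}"{{ $s.Address }}"{{ end }}
  ]
}]
EOF

Consul Template renders a fragment like this into the full AS3 declaration before it is pushed to the BIG-IP.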
We make use of a separate template for the BIG-IP DNS services (GSLB).

5. Deploying Services

Once Consul Template has generated the AS3 declarations, we send them to the BIG-IP devices to create the services. Browsing the BIG-IP devices you can see the services deployed.

LTM Services

Pools are populated by the pool members that are registered in Consul. Status is updated if you disable the service on one of the nodes: the template updates the pool member to the "disabled" state.

ASM Services

ASM policy and logging profile.

DNS Services

Wide IP deployed.

AS3 Declaration

Fetching the AS3 declaration back from the BIG-IP, you can see that the passphrase is encrypted using the SecureVault feature of BIG-IP and is no longer in a reversible format.

Templating from 1 to 2 is Easy

In this example we deployed two applications to two BIG-IP devices. Using these same templates we can easily expand this to four applications and four BIG-IP devices by updating our deployment. The following shows where we have added a second data center to Consul and configured a second BIG-IP device to respond to requests for app001.

In review, we leveraged a basket of HashiCorp tools to deploy BIG-IP services. If you liked this article please be sure to also look at "Increase CI/CD velocity through automation integration".

Zero Touch App Delivery with F5 BIG-IP, Terraform and Consul - Webinar Q&A
The following post contains answers to questions asked during our webinar about Zero Touch Application Delivery with F5 BIG-IP, Terraform and Consul.

Q: What version of BIG-IP does the Terraform provider work with?
The version requirements can be found here: https://github.com/F5Networks/terraform-provider-bigip#requirements

Q: Is there a plan to bootstrap and onboard F5 instances on Azure (and other cloud platforms) by using Terraform only, i.e. without using ARM templates or any other tool in between, similar to deploying and managing Azure resources fully automated in Terraform?
The provisioning in Azure can be done using the Azure resources as well. The F5 BIG-IP provider can be used to configure and manage BIG-IP in all private and public clouds once it is provisioned. Here is an example: https://github.com/garyluf5/f5_terraform/blob/master/HA_via_lb_DO_AS3/main.tf

Q: Were the pool members added dynamically by querying Consul? But who made the change on the F5 device?
Consul exposes a service catalog API that AS3 can consume. With this integration, changes are made on the F5 device automatically: because service discovery for Consul is configured on the BIG-IP, it polls the Consul server at regular intervals, and once changes are noticed on the Consul server, the node worker on the BIG-IP pulls the node information from the Consul server.

Q: Are you saying that only BIG-IP 14.1 is supported with the Terraform AS3 provider? Isn't 12.1 supported in newer versions of AS3?
TMOS v12.1 and above are supported on BIG-IP.

Q: Does the repo mentioned in the webinar contain all the steps to build out the same demo?
Yes - https://github.com/hashicorp/f5-terraform-consul-sd-webinar contains all the steps to build out the same demo.

Q: How does Consul support high availability?
Consul is a highly available system. It uses gossip and consensus protocols under the hood to communicate across groups of a massive number of nodes. It's not uncommon to see Consul datacenters powering application servers in the thousands or more. Read the Consul architecture design document for more details: https://www.consul.io/docs/internals/architecture.html. Customers are recommended to deploy three to five server nodes, which is a good balance of availability and performance. If you have very large clusters pushing a lot of volume, you can use the Consul Enterprise non-voting members feature to horizontally scale out the cluster for better performance. Read the deployment guide for more information: https://learn.hashicorp.com/consul/datacenter-deploy/deployment-guide

Q: Does AS3 actually automatically add the nodes based on the response from Consul? Can we use the same worker but with a different discovery mechanism?
At this point, only the AS3-based JSON polling mechanism is supported as the discovery mechanism. For other features or ideas, open an issue at https://github.com/F5Networks/f5-appsvcs-extension/issues

Q: Does this solution support other F5 modules like ASM, AFM, etc.?
Service discovery applies in the context of LTM. Through this service discovery solution, pool members (nodes) are changed on the BIG-IP. If the app service uses a configuration with ASM and AFM, the same will still apply to the newly discovered nodes.

Q: Does Consul work on other cloud providers?
Yes. Consul is a single Go binary compiled for multiple platforms, and it can run in your own data centers, on bare metal, or in VMs. It can run in container schedulers such as Kubernetes, with a Helm chart that helps you automatically deploy it into that environment. It's cloud-agnostic as well, providing cloud auto-join capabilities for nodes on AWS, Azure, and GCP.

Q: Are we able to tear down the resources that are built using AS3?
Yes - since AS3 uses a declarative API, cleanup can be done by removing what is no longer needed and re-sending the declaration. For example, sending Tenant1 as an empty tenant would delete the whole Tenant1.
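As a sketch of what that tenant removal could look like (the management address and credentials are placeholders; the approach follows the answer above, re-sending the declaration with the tenant emptied so AS3 removes its configuration):

$ curl -sk -u admin:admin -H "Content-Type: application/json" \
    -X POST https://<big-ip-mgmt>/mgmt/shared/appsvcs/declare \
    -d '{
      "class": "AS3",
      "declaration": {
        "class": "ADC",
        "schemaVersion": "3.7.0",
        "Tenant1": { "class": "Tenant" }
      }
    }'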
Q: How does Consul health checking work? How fast does the resource status get updated when something is down?
Consul provides many types of health checks, including TCP, HTTP, L7, or custom scripts. The one shown in the demo was a simple TCP check of localhost port 80 for the NGINX server, with an interval of 10 seconds. These checks are configurable at the agent level and can be optimized for your workloads. Consul's health checking converges quickly, meaning all the nodes can detect failures and communicate them; generally, even in a 10,000-node cluster, you're looking at convergence of under a second, and it usually converges much faster. Read the guide for more information: https://learn.hashicorp.com/consul/getting-started/services

Q: Is the data synchronized automatically after pushing the script, or do we have to do it manually at the device level?
Once we have service discovery configured on the BIG-IP through the Terraform provider AS3 resource, the data is synchronized automatically.

Q: Can you show what the monitor configuration looks like on BIG-IP?
Yes (see video). It is a simple out-of-the-box HTTP monitor.

Q: F5 - Will the same solution also scale down? If we reduce the NGINX instances, will AS3 reduce the nodes back to the reduced number?
Yes, that is correct. It will scale down automatically once the nodes registered to the Consul server are also scaled down.

Automate Application Delivery with F5 and HashiCorp Terraform and Consul
Written by HashiCorp guest author Lance Larsen

Today, more companies are adopting a DevOps approach and agile methodologies to streamline and automate the application delivery process. HashiCorp enables cloud infrastructure automation, providing a suite of DevOps tools which enable consistent workflows to provision, secure, connect, and run any infrastructure for any application. Below are a few you may have heard of:

Terraform
Consul
Vault
Nomad

In this article we will focus on HashiCorp Terraform and Consul, and how they accelerate application delivery by enabling network automation when used with F5 BIG-IP (BIG-IP). Modern tooling, hybrid cloud computing, and agile methodologies have our applications iterating at an ever increasing rate. The network, however, has largely lagged in the arena of infrastructure automation, and remains one of the hardest areas to unbottleneck. F5 and HashiCorp bring NetOps to your infrastructure, unleashing your developers to tackle the increasing demands and scale of modern applications with self-service and resilience for your network.

Terraform allows us to treat the BIG-IP platform "as code", so we can provision network infrastructure automatically when deploying new services. Add Consul into the mix, and we can leverage its service registry to catalog our services and enable BIG-IP's service discovery to update services in real time. As services scale up, scale down, or fail, BIG-IP will automatically update the configuration and route traffic to available and healthy servers. No manual updates, no downtime, good stuff!

When you're done with this article you should have a basic understanding of how Consul can provide dynamic updates to BIG-IP, as well as how we can use Terraform for an "as-code" workflow. I'd encourage you to give this integration a try, whether it be in your own datacenter or in the cloud - HashiCorp tools go everywhere!

Note: This article uses sample IPs from my demo sandbox. Make sure to use IPs from your environment where appropriate.

What is Consul?

Consul is a service networking solution to connect and secure services across runtime platforms. We will be looking at Consul through the lens of its service discovery capabilities for this integration, but it's also a fully fledged service mesh, as well as a dynamic configuration store. Head over to the HashiCorp learn portal for Consul if you want to learn more about these other use cases.

The architecture is a distributed, highly available system. Nodes that provide services to Consul run a Consul agent. A node could be a physical server, VM, or container. The agent is responsible for health checking the service it runs as well as the node itself. Agents report this information to the Consul servers, where we have a view of services running in the catalog. Agents are mostly stateless and talk to one or more Consul servers. The Consul servers are where data is stored and replicated. A cluster of Consul servers is recommended to balance availability and performance. A cluster of Consul servers usually serves a low-latency network, but clusters can be joined across a WAN for multi-datacenter capability.
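To make the agent/server relationship concrete, here is a minimal sketch of joining a client agent to an existing server, assuming a Consul server is already reachable at 10.0.0.100 (the address used in the demo below); the data directory and node name are illustrative choices:

# on the web server node, run a client agent and join it to the server
$ consul agent -retry-join 10.0.0.100 -data-dir /tmp/consul -node nginx

# verify cluster membership from either node
$ consul members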
{ "service": { "name": "nginx", "port": 80, "checks": [ { "id": "nginx", "name": "nginx TCP Check", "tcp": "localhost:80", "interval": "5s", "timeout": "3s" } ] } } We can see we’ve got a simple TCP check on port 80 for a service we’ve identified as Nginx. If that web server was healthy, the Consul servers would reflect that in the catalog. The above example is from a simple Consul datacenter that looks like this. $ consul members Node Address Status Type Build Protocol DC Segment consul 10.0.0.100:8301 alive server 1.5.3 2 dc1 <all> nginx 10.0.0.109:8301 alive client 1.5.3 2 dc1 <default> BIG-IP has an AS3 extension for Consul that allows it to query Consul’s catalog for healthy services and update it’s member pools. This is powerful because virtual servers can be declared ahead of an application deployment, and we do not need to provide a static set of IPs that may be ephemeral or become unhealthy over time. No more waiting, ticket queues, and downtime. More on this AS3 functionality later. Now, we’ll explore a little more below on how we can take this construct and apply it “as code”. What about Terraform? Terraform is an extremely popular tool for managing infrastructure. We can define it “as code” to manage the full lifecycle. Predictable changes and a consistent repeatable workflow help you avoid mistakes and save time. The Terraform ecosystem has over 25,000 commits, more than 1000 modules, and over 200 providers. F5 has excellent support for Terraform, and BIG-IP is no exception. Remember that AS3 support for Consul we discussed earlier? Let’s take a look at an AS3 declaration for Consul with service discovery enabled. AS3 is declarative just like Terraform, and we can infer quite a bit from its definition. AS3 allows us to tell BIG-IP what we want it to look like, and it will figure out the best way to do it for us. { "class": "ADC", "schemaVersion": "3.7.0", "id": "Consul_SD", "controls": { "class": "Controls", "trace": true, "logLevel": "debug" }, "Consul_SD": { "class": "Tenant", "Nginx": { "class": "Application", "template": "http", "serviceMain": { "class": "Service_HTTP", "virtualPort": 8080, "virtualAddresses": [ "10.0.0.200" ], "pool": "web_pool" }, "web_pool": { "class": "Pool", "monitors": [ "http" ], "members": [ { "servicePort": 80, "addressDiscovery": "consul", "updateInterval": 15, "uri": "http://10.0.0.100:8500/v1/catalog/service/nginx" } ] } } } } We see this declaration creates a partition named “Consul_SD”. In that partition we have a virtual server named “serviceMain”, and its pool members will be queried from Consul’s catalog using the List Nodes for Service API. The IP addresses, the virtual server and Consul endpoint, will be specific to your environment.I’ve chosen to compliment Consul’s health checking with some additional monitoring from F5 in this example that can be seen in the pool monitor. Now that we’ve learned a little bit about Consul and Terraform, let’s use them together for an end-to-end solution with BIG-IP. Putting it all together This section assumes you have an existing BIG-IP instance, and a Consul datacenter with a registered service. I use Nginx in this example. The HashiCorp getting started with Consul track can help you spin up a healthy Consul datacenter with a sample service. Let’s revisit our AS3 declaration from earlier, and apply it with Terraform. You can check out support for the full provider here. Below is our simple Terraform file. The “nginx.json” contains the declaration from above. 
provider "bigip" { address = "${var.address}" username = "${var.username}" password = "${var.password}" } resource "bigip_as3" "nginx" { as3_json = "${file("nginx.json")}" tenant_name = "consul_sd" } If you are looking for a more secure way to store sensitive material, such as your BIG-IP provider credentials, you can check out Terraform Enterprise. We can run a Terraform plan and validate our AS3 declaration before we apply it. $ terraform plan Refreshing Terraform state in-memory prior to plan... The refreshed state will be used to calculate this plan, but will not be persisted to local or remote state storage. ------------------------------------------------------------------------ An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # bigip_as3.nginx will be created + resource "bigip_as3" "nginx" { + as3_json = jsonencode( { + Consul_SD = { + Nginx = { + class = "Application" + serviceMain = { + class = "Service_HTTP" + pool = "web_pool" + virtualAddresses = [ + "10.0.0.200", ] + virtualPort = 8080 } + template = "http" + web_pool = { + class = "Pool" + members = [ + { + addressDiscovery = "consul" + servicePort = 80 + updateInterval = 5 + uri = "http://10.0.0.100:8500/v1/catalog/service/nginx" }, ] + monitors = [ + "http", ] } } + class = "Tenant" } + class = "ADC" + controls = { + class = "Controls" + logLevel = "debug" + trace = true } + id = "Consul_SD" + schemaVersion = "3.7.0" } ) + id = (known after apply) + tenant_name = "consul_sd" } Plan: 1 to add, 0 to change, 0 to destroy. ------------------------------------------------------------------------ Note: You didn't specify an "-out" parameter to save this plan, so Terraform can't guarantee that exactly these actions will be performed if "terraform apply" is subsequently run. That output looks good. Let’s go ahead and apply it to our BIG-IP. bigip_as3.nginx: Creating... bigip_as3.nginx: Still creating... [10s elapsed] bigip_as3.nginx: Still creating... [20s elapsed] bigip_as3.nginx: Still creating... [30s elapsed] bigip_as3.nginx: Creation complete after 35s [id=consul_sd] Apply complete! Resources: 1 added, 0 changed, 0 destroyed Now we can check the Consul server and see if we are getting requests. We can see log entries for the Nginx service coming from BIG-IP below. consul monitor -log-level=debug 2019/09/17 03:42:36 [DEBUG] http: Request GET /v1/catalog/service/nginx (104.222µs) from=10.0.0.200:43664 2019/09/17 03:42:41 [DEBUG] http: Request GET /v1/catalog/service/nginx (115.571µs) from=10.0.0.200:44072 2019/09/17 03:42:46 [DEBUG] http: Request GET /v1/catalog/service/nginx (133.711µs) from=10.0.0.200:44452 2019/09/17 03:42:50 [DEBUG] http: Request GET /v1/catalog/service/nginx (110.125µs) from=10.0.0.200:44780 Any authenticated client could make the catalog request, so for our learning, we can use cURL to produce the same response. Notice the IP of the service we are interested in. We will see this IP reflected in BIG-IP for our pool member. 
Any authenticated client could make the catalog request, so for our learning, we can use cURL to produce the same response. Notice the IP of the service we are interested in. We will see this IP reflected in BIG-IP for our pool member.

$ curl http://10.0.0.100:8500/v1/catalog/service/nginx | jq
[
  {
    "ID": "1789c6d6-3ae6-c93b-9fb9-9e106b927b9c",
    "Node": "ip-10-0-0-109",
    "Address": "10.0.0.109",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "10.0.0.109",
      "wan": "10.0.0.109"
    },
    "NodeMeta": {
      "consul-network-segment": ""
    },
    "ServiceKind": "",
    "ServiceID": "nginx",
    "ServiceName": "nginx",
    "ServiceTags": [],
    "ServiceAddress": "",
    "ServiceWeights": {
      "Passing": 1,
      "Warning": 1
    },
    "ServiceMeta": {},
    "ServicePort": 80,
    "ServiceEnableTagOverride": false,
    "ServiceProxyDestination": "",
    "ServiceProxy": {},
    "ServiceConnect": {},
    "CreateIndex": 9,
    "ModifyIndex": 9
  }
]

The network map of our BIG-IP instance should now reflect the dynamic pool.

Last, we should be able to verify that our virtual service actually works. Let's try it out with a simple cURL request.

$ curl http://10.0.0.200:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

That's it! Hello world from Nginx! You've successfully registered your first dynamic BIG-IP pool member with Consul, all codified with Terraform!

Summary

In this article we explored the power of service discovery with BIG-IP and Consul. We added Terraform to apply the workflow "as code" for an end-to-end solution. Check out the resources below to dive deeper into this integration, and stay tuned for more awesome integrations with F5 and HashiCorp!

References

F5 HashiCorp Terraform Consul Service Discovery Webinar
HashiCorp Consul with F5 BIG-IP Learn Guide
F5 BIG-IP Docs for Service Discovery Using Hashicorp Consul
F5 provider for Terraform
Composing AS3 Declarations