# Introducing AI Assistant for F5 Distributed Cloud, F5 NGINX One and BIG-IP
This article is an introduction to AI Assistant and shows how it improves SecOps and NetOps speed across all F5 platforms (Distributed Cloud, NGINX One, and BIG-IP) by solving the complexities around configuration, analytics, log interpretation, and scripting.

# Automate Application Delivery with F5 and HashiCorp Terraform and Consul
Written by HashiCorp guest author Lance Larsen

Today, more companies are adopting a DevOps approach and agile methodologies to streamline and automate the application delivery process. HashiCorp enables cloud infrastructure automation, providing a suite of DevOps tools which enable consistent workflows to provision, secure, connect, and run any infrastructure for any application. Below are a few you may have heard of:

- Terraform
- Consul
- Vault
- Nomad

In this article we will focus on HashiCorp Terraform and Consul, and how they accelerate application delivery by enabling network automation when used with F5 BIG-IP (BIG-IP).

Modern tooling, hybrid cloud computing, and agile methodologies have our applications iterating at an ever-increasing rate. The network, however, has largely lagged in the arena of infrastructure automation, and remains one of the hardest areas to unbottleneck. F5 and HashiCorp bring NetOps to your infrastructure, unleashing your developers to tackle the increasing demands and scale of modern applications with self-service and resilience for your network.

Terraform allows us to treat the BIG-IP platform "as code", so we can provision network infrastructure automatically when deploying new services. Add Consul into the mix, and we can leverage its service registry to catalog our services and enable BIG-IP's service discovery to update services in real time. As services scale up, down, or fail, BIG-IP will automatically update the configuration and route traffic to available and healthy servers. No manual updates, no downtime, good stuff!

When you're done with this article you should have a basic understanding of how Consul can provide dynamic updates to BIG-IP, as well as how we can use Terraform for an "as code" workflow. I'd encourage you to give this integration a try, whether in your own datacenter or in the cloud - HashiCorp tools go everywhere!

Note: This article uses sample IPs from my demo sandbox. Make sure to use IPs from your environment where appropriate.

## What is Consul?

Consul is a service networking solution to connect and secure services across runtime platforms. We will be looking at Consul through the lens of its service discovery capabilities for this integration, but it's also a fully fledged service mesh, as well as a dynamic configuration store. Head over to the HashiCorp learn portal for Consul if you want to learn more about these other use cases.

The architecture is a distributed, highly available system. Nodes that provide services to Consul run a Consul agent. A node could be a physical server, VM, or container. The agent is responsible for health checking the service it runs as well as the node itself. Agents report this information to the Consul servers, where we have a view of services running in the catalog. Agents are mostly stateless and talk to one or more Consul servers. The Consul servers are where data is stored and replicated. A cluster of Consul servers is recommended to balance availability and performance. A cluster of Consul servers usually serves a low-latency network, but clusters can be joined to one another across a WAN for multi-datacenter capability.

Let's look at a simple health check for an Nginx web server. We'd typically run an agent in client mode on the web server node. Below is the check definition, in JSON, for that agent.

```json
{
  "service": {
    "name": "nginx",
    "port": 80,
    "checks": [
      {
        "id": "nginx",
        "name": "nginx TCP Check",
        "tcp": "localhost:80",
        "interval": "5s",
        "timeout": "3s"
      }
    ]
  }
}
```
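Assuming the definition above is saved as /etc/consul.d/nginx.json (a hypothetical path; the addresses below simply reuse the sandbox IPs from this article), the agent on the web server node might be started along these lines:

```shell
# Run a Consul agent in client mode on the web server node.
# -config-dir loads the service/check definition saved above;
# -retry-join points the client at the Consul server.
consul agent \
  -data-dir=/opt/consul \
  -config-dir=/etc/consul.d \
  -bind=10.0.0.109 \
  -retry-join=10.0.0.100
```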
{ "service": { "name": "nginx", "port": 80, "checks": [ { "id": "nginx", "name": "nginx TCP Check", "tcp": "localhost:80", "interval": "5s", "timeout": "3s" } ] } } We can see we’ve got a simple TCP check on port 80 for a service we’ve identified as Nginx. If that web server was healthy, the Consul servers would reflect that in the catalog. The above example is from a simple Consul datacenter that looks like this. $ consul members Node Address Status Type Build Protocol DC Segment consul 10.0.0.100:8301 alive server 1.5.3 2 dc1 <all> nginx 10.0.0.109:8301 alive client 1.5.3 2 dc1 <default> BIG-IP has an AS3 extension for Consul that allows it to query Consul’s catalog for healthy services and update it’s member pools. This is powerful because virtual servers can be declared ahead of an application deployment, and we do not need to provide a static set of IPs that may be ephemeral or become unhealthy over time. No more waiting, ticket queues, and downtime. More on this AS3 functionality later. Now, we’ll explore a little more below on how we can take this construct and apply it “as code”. What about Terraform? Terraform is an extremely popular tool for managing infrastructure. We can define it “as code” to manage the full lifecycle. Predictable changes and a consistent repeatable workflow help you avoid mistakes and save time. The Terraform ecosystem has over 25,000 commits, more than 1000 modules, and over 200 providers. F5 has excellent support for Terraform, and BIG-IP is no exception. Remember that AS3 support for Consul we discussed earlier? Let’s take a look at an AS3 declaration for Consul with service discovery enabled. AS3 is declarative just like Terraform, and we can infer quite a bit from its definition. AS3 allows us to tell BIG-IP what we want it to look like, and it will figure out the best way to do it for us. { "class": "ADC", "schemaVersion": "3.7.0", "id": "Consul_SD", "controls": { "class": "Controls", "trace": true, "logLevel": "debug" }, "Consul_SD": { "class": "Tenant", "Nginx": { "class": "Application", "template": "http", "serviceMain": { "class": "Service_HTTP", "virtualPort": 8080, "virtualAddresses": [ "10.0.0.200" ], "pool": "web_pool" }, "web_pool": { "class": "Pool", "monitors": [ "http" ], "members": [ { "servicePort": 80, "addressDiscovery": "consul", "updateInterval": 15, "uri": "http://10.0.0.100:8500/v1/catalog/service/nginx" } ] } } } } We see this declaration creates a partition named “Consul_SD”. In that partition we have a virtual server named “serviceMain”, and its pool members will be queried from Consul’s catalog using the List Nodes for Service API. The IP addresses, the virtual server and Consul endpoint, will be specific to your environment. I’ve chosen to compliment Consul’s health checking with some additional monitoring from F5 in this example that can be seen in the pool monitor. Now that we’ve learned a little bit about Consul and Terraform, let’s use them together for an end-to-end solution with BIG-IP. Putting it all together This section assumes you have an existing BIG-IP instance, and a Consul datacenter with a registered service. I use Nginx in this example. The HashiCorp getting started with Consul track can help you spin up a healthy Consul datacenter with a sample service. Let’s revisit our AS3 declaration from earlier, and apply it with Terraform. You can check out support for the full provider here. Below is our simple Terraform file. The “nginx.json” contains the declaration from above. 
provider "bigip" { address = "${var.address}" username = "${var.username}" password = "${var.password}" } resource "bigip_as3" "nginx" { as3_json = "${file("nginx.json")}" tenant_name = "consul_sd" } If you are looking for a more secure way to store sensitive material, such as your BIG-IP provider credentials, you can check out Terraform Enterprise. We can run a Terraform plan and validate our AS3 declaration before we apply it. $ terraform plan Refreshing Terraform state in-memory prior to plan... The refreshed state will be used to calculate this plan, but will not be persisted to local or remote state storage. ------------------------------------------------------------------------ An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # bigip_as3.nginx will be created + resource "bigip_as3" "nginx" { + as3_json = jsonencode( { + Consul_SD = { + Nginx = { + class = "Application" + serviceMain = { + class = "Service_HTTP" + pool = "web_pool" + virtualAddresses = [ + "10.0.0.200", ] + virtualPort = 8080 } + template = "http" + web_pool = { + class = "Pool" + members = [ + { + addressDiscovery = "consul" + servicePort = 80 + updateInterval = 5 + uri = "http://10.0.0.100:8500/v1/catalog/service/nginx" }, ] + monitors = [ + "http", ] } } + class = "Tenant" } + class = "ADC" + controls = { + class = "Controls" + logLevel = "debug" + trace = true } + id = "Consul_SD" + schemaVersion = "3.7.0" } ) + id = (known after apply) + tenant_name = "consul_sd" } Plan: 1 to add, 0 to change, 0 to destroy. ------------------------------------------------------------------------ Note: You didn't specify an "-out" parameter to save this plan, so Terraform can't guarantee that exactly these actions will be performed if "terraform apply" is subsequently run. That output looks good. Let’s go ahead and apply it to our BIG-IP. bigip_as3.nginx: Creating... bigip_as3.nginx: Still creating... [10s elapsed] bigip_as3.nginx: Still creating... [20s elapsed] bigip_as3.nginx: Still creating... [30s elapsed] bigip_as3.nginx: Creation complete after 35s [id=consul_sd] Apply complete! Resources: 1 added, 0 changed, 0 destroyed Now we can check the Consul server and see if we are getting requests. We can see log entries for the Nginx service coming from BIG-IP below. consul monitor -log-level=debug 2019/09/17 03:42:36 [DEBUG] http: Request GET /v1/catalog/service/nginx (104.222µs) from=10.0.0.200:43664 2019/09/17 03:42:41 [DEBUG] http: Request GET /v1/catalog/service/nginx (115.571µs) from=10.0.0.200:44072 2019/09/17 03:42:46 [DEBUG] http: Request GET /v1/catalog/service/nginx (133.711µs) from=10.0.0.200:44452 2019/09/17 03:42:50 [DEBUG] http: Request GET /v1/catalog/service/nginx (110.125µs) from=10.0.0.200:44780 Any authenticated client could make the catalog request, so for our learning, we can use cURL to produce the same response. Notice the IP of the service we are interested in. We will see this IP reflected in BIG-IP for our pool member. 
Any authenticated client could make the catalog request, so for our learning, we can use cURL to produce the same response. Notice the IP of the service we are interested in. We will see this IP reflected in BIG-IP for our pool member.

```shell
$ curl http://10.0.0.100:8500/v1/catalog/service/nginx | jq
[
  {
    "ID": "1789c6d6-3ae6-c93b-9fb9-9e106b927b9c",
    "Node": "ip-10-0-0-109",
    "Address": "10.0.0.109",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "10.0.0.109",
      "wan": "10.0.0.109"
    },
    "NodeMeta": {
      "consul-network-segment": ""
    },
    "ServiceKind": "",
    "ServiceID": "nginx",
    "ServiceName": "nginx",
    "ServiceTags": [],
    "ServiceAddress": "",
    "ServiceWeights": {
      "Passing": 1,
      "Warning": 1
    },
    "ServiceMeta": {},
    "ServicePort": 80,
    "ServiceEnableTagOverride": false,
    "ServiceProxyDestination": "",
    "ServiceProxy": {},
    "ServiceConnect": {},
    "CreateIndex": 9,
    "ModifyIndex": 9
  }
]
```

The network map of our BIG-IP instance should now reflect the dynamic pool. Last, we should be able to verify that our virtual service actually works. Let's try it out with a simple cURL request:

```shell
$ curl http://10.0.0.200:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

That's it! Hello world from Nginx! You've successfully registered your first dynamic BIG-IP pool member with Consul, all codified with Terraform!
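When you are done experimenting, the same workflow cleans up after itself; destroying the resource should remove the consul_sd tenant (and its virtual server and pool) from BIG-IP:

```shell
# Tear down the Terraform-managed resources from this walkthrough.
terraform destroy
```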
## Summary

In this article we explored the power of service discovery with BIG-IP and Consul. We added Terraform to apply the workflow "as code" for an end-to-end solution. Check out the resources below to dive deeper into this integration, and stay tuned for more awesome integrations with F5 and HashiCorp!

## References

- F5 HashiCorp Terraform Consul Service Discovery Webinar
- HashiCorp Consul with F5 BIG-IP Learn Guide
- F5 BIG-IP Docs for Service Discovery Using HashiCorp Consul
- F5 provider for Terraform
- Composing AS3 Declarations

# Simplifying Application Health Monitoring with F5 BIG-IP

A simple agreement between BIG-IP administrators and application owners can foster smooth collaboration between teams. Application owners define their own simple or complex health monitors and agree to expose a conventional /health endpoint. When the /health endpoint responds with an HTTP 200 status, BIG-IP assumes the application is healthy based on the application owners' own criteria.

## The Challenge of Health Monitoring in Modern Environments

F5 BIG-IP administrators in Network Operations (NetOps) teams often work with application teams because the BIG-IP acts as a full proxy, providing services like:

- TLS termination
- Load balancing
- Health monitoring

Health checks are crucial for effective load balancing. The BIG-IP uses them to determine where to send traffic among back-end application servers. However, health monitoring frequently causes friction between teams.

## Problems with the Traditional Approach

Traditionally, BIG-IP administrators create and maintain health monitors ranging from simple ICMP pings to complex monitors that:

- Simulate user transactions
- Verify HTTP response codes
- Validate payload contents
- Track application dependencies

This leads to several issues:

- Knowledge Gap: NetOps may not fully grasp each application's intricacies.
- Change Management Overhead: Application updates require retesting monitors, causing delays.
- Production Risk: Monitors can break after application changes, incorrectly marking services as up/down.
- Team Friction: Troubleshooting failed health checks involves tedious back-and-forth between teams.

## A Cloud-Native Solution

The cloud-native and microservices communities have patterns that elegantly solve these problems. One widely used pattern is the health endpoint, which adapts well to BIG-IP environments.

### The /health Endpoint Convention

Cloud-native applications commonly expose dedicated health endpoints like /health, /healthy, or /ready. These return standard status codes reflecting the application's state. The /health endpoint provides a clear contract between NetOps and application teams for BIG-IP integration.

### Implementing the Contract

This approach establishes a simple agreement:

Application Team Responsibilities:

- Implement /health to return HTTP 200 when the application is ready for traffic
- Define "healthy" based on application needs (database connectivity, dependencies, etc.)
- Maintain the health check logic as the application changes

BIG-IP Team Responsibilities:

- Configure an HTTP monitor targeting the /health endpoint
- Treat 200 as "healthy", anything else as "unhealthy"

### Benefits of This Approach

- Aligned Expertise: Application teams define health based on their knowledge.
- Less Friction: BIG-IP configuration stays stable as applications evolve.
- Better Reliability: Health checks reflect true application health, including dependencies.
- Easier Troubleshooting: The /health endpoint can return detailed diagnostic info; BIG-IP ignores the response body, so it can be used strictly for troubleshooting.
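Before wiring this into BIG-IP, you can exercise the contract by hand; a quick sketch against the Node.js example shown below (assuming it listens on localhost:3000):

```shell
# Ask the app for its own health verdict; -i prints the status line,
# which is all BIG-IP keys on (200 = healthy, anything else = down).
curl -i http://localhost:3000/health
```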
## Implementation Examples

### F5 BIG-IP Health Monitor Configuration

```
ltm monitor http /Common/app-health-monitor {
    defaults-from /Common/http
    destination *:*
    interval 5
    recv 200
    recv-disable none
    send "GET /health HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}
```

### Node.js Health Endpoint Implementation

```javascript
const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Application is running');
});

// Health endpoint: returns 200 only when all dependencies check out.
app.get('/health', async (req, res) => {
  try {
    const dbStatus = await checkDatabaseConnection();
    const serviceStatus = await checkDependentServices();

    if (dbStatus && serviceStatus) {
      return res.status(200).json({
        status: 'healthy',
        database: 'connected',
        services: 'available',
        timestamp: new Date().toISOString()
      });
    }

    // Some dependency is down: report 503 so BIG-IP marks the pool
    // member unhealthy, with diagnostic detail for humans.
    res.status(503).json({
      status: 'unhealthy',
      database: dbStatus ? 'connected' : 'disconnected',
      services: serviceStatus ? 'available' : 'unavailable',
      timestamp: new Date().toISOString()
    });
  } catch (error) {
    res.status(500).json({
      status: 'error',
      message: error.message,
      timestamp: new Date().toISOString()
    });
  }
});

async function checkDatabaseConnection() {
  // Check the real database connection here.
  return true;
}

async function checkDependentServices() {
  // Check required service connections here.
  return true;
}

app.listen(port, () => {
  console.log(`Application listening at http://localhost:${port}`);
});
```

Adopting this health check pattern can greatly reduce friction between NetOps and application teams while improving reliability. The simple contract of HTTP 200 for healthy provides the needed integration while letting each team focus on their expertise.

For apps that can't implement a custom /health endpoint, BIG-IP admins can still use traditional ICMP or TCP port monitoring (a minimal sketch of such a monitor appears at the end of this article). However, these basic checks can't accurately reflect an app's true health and complex dependencies.

This approach fosters collaboration and leverages the specialized knowledge of both network and application teams. The result is more reliable services and smoother operations.
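For reference, such a fallback might look like the following, mirroring the HTTP monitor above but only confirming that the TCP port accepts connections (a sketch; the monitor name is illustrative):

```
ltm monitor tcp /Common/app-tcp-monitor {
    defaults-from /Common/tcp
    destination *:*
    interval 5
    timeout 16
}
```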