Automate Application Delivery with F5 and HashiCorp Terraform and Consul
Written by HashiCorp guest author Lance Larsen

Today, more companies are adopting a DevOps approach and agile methodologies to streamline and automate the application delivery process. HashiCorp enables cloud infrastructure automation, providing a suite of DevOps tools that enable consistent workflows to provision, secure, connect, and run any infrastructure for any application. Below are a few you may have heard of: Terraform, Consul, Vault, and Nomad.

In this article we will focus on HashiCorp Terraform and Consul, and how they accelerate application delivery by enabling network automation when used with F5 BIG-IP (BIG-IP). Modern tooling, hybrid cloud computing, and agile methodologies have our applications iterating at an ever-increasing rate. The network, however, has largely lagged in the arena of infrastructure automation and remains one of the hardest bottlenecks to remove. F5 and HashiCorp bring NetOps to your infrastructure, unleashing your developers to tackle the increasing demands and scale of modern applications with self-service and resilience for your network.

Terraform allows us to treat the BIG-IP platform "as code", so we can provision network infrastructure automatically when deploying new services. Add Consul into the mix, and we can leverage its service registry to catalog our services and enable BIG-IP's service discovery to update services in real time. As services scale up, down, or fail, BIG-IP will automatically update its configuration and route traffic to available and healthy servers. No manual updates, no downtime, good stuff!

When you're done with this article you should have a basic understanding of how Consul can provide dynamic updates to BIG-IP, as well as how we can use Terraform for an "as-code" workflow. I'd encourage you to give this integration a try, whether in your own datacenter or in the cloud; HashiCorp tools go everywhere!

Note: This article uses sample IPs from my demo sandbox. Make sure to use IPs from your environment where appropriate.

What is Consul?

Consul is a service networking solution to connect and secure services across runtime platforms. We will be looking at Consul through the lens of its service discovery capabilities for this integration, but it's also a fully fledged service mesh, as well as a dynamic configuration store. Head over to the HashiCorp Learn portal for Consul if you want to learn more about these other use cases.

The architecture is a distributed, highly available system. Nodes that provide services to Consul run a Consul agent. A node could be a physical server, a VM, or a container. The agent is responsible for health checking the service it runs as well as the node itself. Agents report this information to the Consul servers, which give us a view of the services running in the catalog. Agents are mostly stateless and talk to one or more Consul servers. The Consul servers are where data is stored and replicated. A cluster of Consul servers is recommended to balance availability and performance. A cluster of Consul servers usually serves a low-latency network, but can be joined to other clusters across a WAN for multi-datacenter capability.
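If you'd like to stand up a small sandbox like the one used in this article, a minimal sketch looks roughly like the following. It assumes the sample IPs from my environment (10.0.0.100 for the server, 10.0.0.109 for the Nginx node) and a service/check definition saved under /etc/consul.d/ (we'll look at one next); adjust for your own environment.

```bash
# On the Consul server node (10.0.0.100): a single-server, dev-style datacenter.
consul agent -server -bootstrap-expect=1 \
  -node=consul -bind=10.0.0.100 \
  -data-dir=/opt/consul -ui &

# On the Nginx node (10.0.0.109): run the agent in client mode and join the
# server; service and check definitions are loaded from -config-dir.
consul agent -node=nginx -bind=10.0.0.109 \
  -retry-join=10.0.0.100 \
  -data-dir=/opt/consul -config-dir=/etc/consul.d &
```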
{ "service": { "name": "nginx", "port": 80, "checks": [ { "id": "nginx", "name": "nginx TCP Check", "tcp": "localhost:80", "interval": "5s", "timeout": "3s" } ] } } We can see we’ve got a simple TCP check on port 80 for a service we’ve identified as Nginx. If that web server was healthy, the Consul servers would reflect that in the catalog. The above example is from a simple Consul datacenter that looks like this. $ consul members Node Address Status Type Build Protocol DC Segment consul 10.0.0.100:8301 alive server 1.5.3 2 dc1 <all> nginx 10.0.0.109:8301 alive client 1.5.3 2 dc1 <default> BIG-IP has an AS3 extension for Consul that allows it to query Consul’s catalog for healthy services and update it’s member pools. This is powerful because virtual servers can be declared ahead of an application deployment, and we do not need to provide a static set of IPs that may be ephemeral or become unhealthy over time. No more waiting, ticket queues, and downtime. More on this AS3 functionality later. Now, we’ll explore a little more below on how we can take this construct and apply it “as code”. What about Terraform? Terraform is an extremely popular tool for managing infrastructure. We can define it “as code” to manage the full lifecycle. Predictable changes and a consistent repeatable workflow help you avoid mistakes and save time. The Terraform ecosystem has over 25,000 commits, more than 1000 modules, and over 200 providers. F5 has excellent support for Terraform, and BIG-IP is no exception. Remember that AS3 support for Consul we discussed earlier? Let’s take a look at an AS3 declaration for Consul with service discovery enabled. AS3 is declarative just like Terraform, and we can infer quite a bit from its definition. AS3 allows us to tell BIG-IP what we want it to look like, and it will figure out the best way to do it for us. { "class": "ADC", "schemaVersion": "3.7.0", "id": "Consul_SD", "controls": { "class": "Controls", "trace": true, "logLevel": "debug" }, "Consul_SD": { "class": "Tenant", "Nginx": { "class": "Application", "template": "http", "serviceMain": { "class": "Service_HTTP", "virtualPort": 8080, "virtualAddresses": [ "10.0.0.200" ], "pool": "web_pool" }, "web_pool": { "class": "Pool", "monitors": [ "http" ], "members": [ { "servicePort": 80, "addressDiscovery": "consul", "updateInterval": 15, "uri": "http://10.0.0.100:8500/v1/catalog/service/nginx" } ] } } } } We see this declaration creates a partition named “Consul_SD”. In that partition we have a virtual server named “serviceMain”, and its pool members will be queried from Consul’s catalog using the List Nodes for Service API. The IP addresses, the virtual server and Consul endpoint, will be specific to your environment.I’ve chosen to compliment Consul’s health checking with some additional monitoring from F5 in this example that can be seen in the pool monitor. Now that we’ve learned a little bit about Consul and Terraform, let’s use them together for an end-to-end solution with BIG-IP. Putting it all together This section assumes you have an existing BIG-IP instance, and a Consul datacenter with a registered service. I use Nginx in this example. The HashiCorp getting started with Consul track can help you spin up a healthy Consul datacenter with a sample service. Let’s revisit our AS3 declaration from earlier, and apply it with Terraform. You can check out support for the full provider here. Below is our simple Terraform file. The “nginx.json” contains the declaration from above. 
provider "bigip" { address = "${var.address}" username = "${var.username}" password = "${var.password}" } resource "bigip_as3" "nginx" { as3_json = "${file("nginx.json")}" tenant_name = "consul_sd" } If you are looking for a more secure way to store sensitive material, such as your BIG-IP provider credentials, you can check out Terraform Enterprise. We can run a Terraform plan and validate our AS3 declaration before we apply it. $ terraform plan Refreshing Terraform state in-memory prior to plan... The refreshed state will be used to calculate this plan, but will not be persisted to local or remote state storage. ------------------------------------------------------------------------ An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # bigip_as3.nginx will be created + resource "bigip_as3" "nginx" { + as3_json = jsonencode( { + Consul_SD = { + Nginx = { + class = "Application" + serviceMain = { + class = "Service_HTTP" + pool = "web_pool" + virtualAddresses = [ + "10.0.0.200", ] + virtualPort = 8080 } + template = "http" + web_pool = { + class = "Pool" + members = [ + { + addressDiscovery = "consul" + servicePort = 80 + updateInterval = 5 + uri = "http://10.0.0.100:8500/v1/catalog/service/nginx" }, ] + monitors = [ + "http", ] } } + class = "Tenant" } + class = "ADC" + controls = { + class = "Controls" + logLevel = "debug" + trace = true } + id = "Consul_SD" + schemaVersion = "3.7.0" } ) + id = (known after apply) + tenant_name = "consul_sd" } Plan: 1 to add, 0 to change, 0 to destroy. ------------------------------------------------------------------------ Note: You didn't specify an "-out" parameter to save this plan, so Terraform can't guarantee that exactly these actions will be performed if "terraform apply" is subsequently run. That output looks good. Let’s go ahead and apply it to our BIG-IP. bigip_as3.nginx: Creating... bigip_as3.nginx: Still creating... [10s elapsed] bigip_as3.nginx: Still creating... [20s elapsed] bigip_as3.nginx: Still creating... [30s elapsed] bigip_as3.nginx: Creation complete after 35s [id=consul_sd] Apply complete! Resources: 1 added, 0 changed, 0 destroyed Now we can check the Consul server and see if we are getting requests. We can see log entries for the Nginx service coming from BIG-IP below. consul monitor -log-level=debug 2019/09/17 03:42:36 [DEBUG] http: Request GET /v1/catalog/service/nginx (104.222µs) from=10.0.0.200:43664 2019/09/17 03:42:41 [DEBUG] http: Request GET /v1/catalog/service/nginx (115.571µs) from=10.0.0.200:44072 2019/09/17 03:42:46 [DEBUG] http: Request GET /v1/catalog/service/nginx (133.711µs) from=10.0.0.200:44452 2019/09/17 03:42:50 [DEBUG] http: Request GET /v1/catalog/service/nginx (110.125µs) from=10.0.0.200:44780 Any authenticated client could make the catalog request, so for our learning, we can use cURL to produce the same response. Notice the IP of the service we are interested in. We will see this IP reflected in BIG-IP for our pool member. 
Any authenticated client could make the catalog request, so for our learning we can use cURL to produce the same response. Notice the IP of the service we are interested in; we will see this IP reflected in BIG-IP as our pool member.

```
$ curl http://10.0.0.100:8500/v1/catalog/service/nginx | jq
[
  {
    "ID": "1789c6d6-3ae6-c93b-9fb9-9e106b927b9c",
    "Node": "ip-10-0-0-109",
    "Address": "10.0.0.109",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "10.0.0.109",
      "wan": "10.0.0.109"
    },
    "NodeMeta": {
      "consul-network-segment": ""
    },
    "ServiceKind": "",
    "ServiceID": "nginx",
    "ServiceName": "nginx",
    "ServiceTags": [],
    "ServiceAddress": "",
    "ServiceWeights": {
      "Passing": 1,
      "Warning": 1
    },
    "ServiceMeta": {},
    "ServicePort": 80,
    "ServiceEnableTagOverride": false,
    "ServiceProxyDestination": "",
    "ServiceProxy": {},
    "ServiceConnect": {},
    "CreateIndex": 9,
    "ModifyIndex": 9
  }
]
```

The network map of our BIG-IP instance should now reflect the dynamic pool. Last, we should be able to verify that our virtual service actually works. Let's try it out with a simple cURL request.

```
$ curl http://10.0.0.200:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

That's it! Hello world from Nginx! You've successfully registered your first dynamic BIG-IP pool member with Consul, all codified with Terraform!

Summary

In this article we explored the power of service discovery with BIG-IP and Consul. We added Terraform to apply the workflow "as code" for an end-to-end solution. Check out the resources below to dive deeper into this integration, and stay tuned for more awesome integrations with F5 and HashiCorp!

References

- F5 HashiCorp Terraform Consul Service Discovery Webinar
- HashiCorp Consul with F5 BIG-IP Learn Guide
- F5 BIG-IP Docs for Service Discovery Using HashiCorp Consul
- F5 provider for Terraform
- Composing AS3 Declarations

NetOps Primer: What are Microservices?
Microservices are coming to a network near you. Forty-one percent (41%) of respondents to our State of Application Delivery 2018 survey told us their organizations were exploring microservices as a result of digital transformation initiatives. With them will come operational and architectural impacts, which makes them an important architectural evolution for NetOps to understand.

Microservices: A Definition

All too often, microservices and containers are used interchangeably. It is not merely pedantry to correct this usage, as the former is an application architecture and the latter a delivery model. Containers are used to deliver a variety of infrastructure and operational services as well as microservices. They are prized for their portability; cloud agnosticism is a key characteristic of containers that makes multi-cloud an achievable goal. Nothing in the definition or guiding principles of microservices architecture makes mention of containers. Containers are often used to deploy individual microservices because they are a good operational fit, but they are not required.

Microservices is an architectural style of application design. It can be compared to client-server, three-tier web, and SOA. From a high-level perspective, much of application architecture is dedicated to determining where and how to execute logic and access data. For much of application architectural history, data access has driven the design of applications. Issues with access and consistency drive developers to design applications in such a way as to ensure optimal use of data stores and reliable consistency of that data across services. Business logic is distributed such that it is closest to that data, while the presentation layer is almost always distributed closest to the client. Business logic can be contained in a single "application", as in traditional architectures, or it can be distributed, as is the case with SOA and, increasingly, with microservices and serverless computing.

This division of business logic is the driving design principle for microservices. Microservices view an application as a collection of objects and functions that make up the "business logic". This logic enables us to 'checkout', 'track orders', 'manage profiles', and perform a hundred other application-specific tasks. The code that makes up 'checkout' and 'manage profile' can be grouped neatly into a single microservice. Essentially, microservices architecture is a set of principles that, when applied to an application, determine how business logic is grouped and distributed across an 'application'. Microservices architecture aims to group that logic as tightly as it can, in the smallest sustainable chunks possible. This aligns well with agile methodologies that demand rapid, frequent iterations over the code. By isolating chunks of business logic, each chunk can be iterated over in isolation, allowing new features and functionality to be delivered rapidly.

The size of a microservice is largely determined by the development organization's ability to maintain the code and its speed of delivery. Too many microservices introduce complexity that slows down delivery. Thus, most organizations have settled on fewer microservices and moved from a purely functional approach to a more object-oriented strategy. Regardless, a modern application is still made up of more microservices than a traditional, three-tier web application.

The Impact on NetOps

So you might be thinking: so what?
Why should NetOps care how developers architect applications? The impact on NetOps is largely operational. Instead of deploying one application, you must now find time to deploy and operate all the application services needed to support many more applications. This imposes a burden on every operational group in IT that makes up the deployment pipeline, as it is likely that no two applications share the same deployment schedule. This is true whether microservices are deployed in containers or not. Regardless of the deployment model, whether in individual VMs, in containers, or even on bare-metal servers, each microservice must effectively be treated as an individual application.

There are also technical and architectural impacts on NetOps, largely due to the reliance on API paths rather than hosts to route requests to the appropriate service. This can mean inserting new app routing services into the network architecture to support potentially multiple layers of HTTP-based routing. The simplicity or complexity of the routing environment is largely determined by how granular developers get during design and implementation. Highly granular, function-based microservices architectures can result in hundreds of tiny microservices. Less aggressive (and more reasonable in an enterprise setting) implementations will see smaller but still significant increases.

Getting familiar with microservices will be a boon to NetOps and a necessary step toward supporting the next generation of applications built on this architectural evolution.

NetOps Meets DevOps - The State of Network Automation Survey
We want to understand your company's current application architectures and the adoption of continuous delivery and continuous deployment practices within your organization. Please answer some brief questions about:

- How important automation is to your application deployments
- Drivers for continuous delivery and continuous deployment (CD/CD)
- Current challenges and concerns with respect to network and security operations
- How your future initiatives are shaping your plans for network and security automation
- Usage of automation tools across public and private cloud

Please note that your responses will be confidential and reported only in aggregate. As a thank you for participating, you will receive a copy of the final aggregate survey results, and one lucky participant will receive a $500 Amazon gift card.

All information will remain confidential. This survey is being administered by an independent research company on behalf of F5 and Red Hat. Your answers will be kept strictly confidential, and your feedback will be combined with the feedback from all respondents worldwide.

UPDATE: The report has now been finalized and can be found here: NetOps Meets DevOps - The State of Network Automation

Many thanks from the DevCentral Team!

The Three HTTP Routing Patterns You Should Know
HTTP is the de facto application transport layer. The majority of applications and APIs today are delivered via HTTP, regardless of their content (HTML, JSON, XML). That means most of the scaling going on is focused on scaling HTTP-based apps. In fact, when I peek at our latest data from nearly 4 million virtual servers, 63% of them are HTTP/S. In MuleSoft's latest Connectivity Benchmark, respondents reported an average of 1,020 applications in their enterprise environments. Even if only 63% are HTTP/S, that's still a lot of HTTP, and not a lot of IP space to give it. Most organizations aren't lucky enough to own significant ranges of publicly accessible IP addresses. That's one of the reasons "virtual servers" exist in the realm of web serving: so that a single IP address can service multiple servers (or applications).

Host-based (virtual) routing has been a go-to for the Internet for years. And though it's the most commonly used (and best known) type of HTTP routing, there are actually three types of HTTP routing NetOps should get familiar with. That's because the world of applications is increasingly clashing with that of the network, and many of the deployment patterns used by DevOps rely on HTTP-based routing capabilities.

1. Host-based

The old standard. Host-based routing is what enables virtual servers on web servers. It's also used by application services like load balancers and ingress controllers to achieve the same thing: one IP address, many hosts. Host-based routing allows you to send a request for api.example.com and a request for web.example.com to the same endpoint with the certainty each will be delivered to the correct back-end application.

2. Path-based

Increasingly common, particularly for scaling containers with ingress controllers, is path-based routing. Path-based routing requires visibility into the URI portion of an HTTP request. You can route based on the entire path (not advisable) or on a portion of the path. For example, you could search for /getprofile in the path and route those requests to one application while routing all others to a different application. You could also search for /v1 in the path and use it to implement a form of API versioning. Because of the focus on API-only communication in modern apps (like mobile) and architectures (like microservices), path-based routing is increasingly important to enabling not only scale but simple delivery. When everything you need to know is found in the URI, it becomes imperative that you can dissect that path into its composite pieces and make decisions on where each request needs to go.
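To see the first two patterns from the client side, here is a quick sketch using curl. The host names, paths, and IP address are hypothetical, and the routing logic itself would live in whatever proxy or load balancer sits in front of the applications.

```bash
# Host-based: both requests target the same IP, but the Host header
# determines which back-end application answers.
curl -H "Host: web.example.com" http://203.0.113.10/
curl -H "Host: api.example.com" http://203.0.113.10/

# Path-based: same host, but the URI path selects the pool,
# e.g. /getprofile to the profile service, /v1/ to the v1 API.
curl http://api.example.com/getprofile
curl http://api.example.com/v1/orders
```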
3. Header-based

Header-based is a broad category that includes some familiar routing patterns such as persistence (sticky sessions). Header-based routing simply means that you use an arbitrary HTTP header as the basis for determining how to route a request. This might be a standard header, like content-type or cookie, or it might be a custom header, like x-custom-header-for-my-app. The use of HTTP headers to route requests is a long-standing tradition of sorts. The concept of sticky sessions (persistence) is based on the use of cookies to aid in scaling stateful applications. It is also instrumental in maintaining secure sessions (SSL/TLS).

Note that header-based routing is usually treated separately from host-based routing, even though Host is technically just another HTTP header. Many systems were able to perform host-based routing but not general header-based routing, though today it is hard to find a load balancer or proxy that cannot route based on any header.

Despite the apparent simplicity of these routing patterns, their importance should not be underestimated. API gateways, web application firewalls (particularly their inspection capabilities), ingress controllers, and a robust set of other application services rely on being able to route based on this information.
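To round this out with something concrete, here is a small client-side sketch of header-based routing in action. The IP address, cookie file, and custom header name are hypothetical, and the actual routing decision would be configured in the proxy or load balancer in front of the application.

```bash
# Sticky sessions: the proxy sets a session cookie on the first response
# and uses it to pin subsequent requests to the same back-end instance.
curl -c cookies.txt http://203.0.113.10/app/login

# Replaying the stored cookie lets the proxy honor the sticky session.
curl -b cookies.txt http://203.0.113.10/app/dashboard

# A custom header can steer traffic as well, for example to a canary pool,
# if the proxy is configured to inspect it.
curl -H "X-Canary: true" http://203.0.113.10/app/dashboard
```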