The Hitchhiker’s Guide to BIG-IP in Azure – “High Availability”
Hello and welcome to the third installment of “The Hitchhiker’s Guide to BIG-IP in Azure”. In previous posts (I assume you read and memorized them… right?), we looked at the Azure infrastructure and the many options one has for deploying a BIG-IP into Azure. Here are links to the previous posts in case you missed them; they are well worth the read:

The Hitchhiker’s Guide to BIG-IP in Azure
The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”

Let us now turn our attention to the next topic in our journey: high availability.

A key to ensuring high availability of your Azure-hosted application (or any application, for that matter) is eliminating any potential single points of failure. To that end, load balancing is typically used as the primary means to ensure a copy of the application is always reachable. This is one of the most common reasons for utilizing a BIG-IP. Those of us who have deployed the F5 BIG-IP in a traditional data center environment know that ensuring high availability (HA) is more than just having multiple pool members behind a single BIG-IP; it’s equally important to ensure the BIG-IP itself does not represent a single point of failure. The same holds true for Azure deployments: eliminate single points of failure.

While the theory is the same for both on-premises and cloud-based deployments, the process of deploying and configuring for HA is not. As you might recall from our first installment, due to infrastructure limitations common across public clouds, the traditional method of deploying the BIG-IP in an active/standby pair is not feasible. That’s ok; no need to search the universe. There’s an answer; and no, it’s not 42. - Sorry, couldn’t help myself.

Active / Active Deployment

“Say, since I have to have at least 2 BIG-IPs for HA, why wouldn’t I want to use both?” Well, in most cases you probably would want to, and can. Since the BIG-IP is basically another virtual machine, we can make use of various native Azure resources (refer to Figure 1) to provide high availability.

Availability Sets

The BIG-IPs can be (and should be) placed in an availability set. The BIG-IPs are located in separate fault and update domains, ensuring local hardware fault tolerance.

Azure Load Balancers

The BIG-IPs can be deployed behind an Azure load balancer to provide active/active high availability. It may seem strange to “load balance” a load balancer. However, it’s important to remember that the BIG-IP provides a variety of application services including WAF, federation, SSO, SSL offload, etc. This is in addition to traffic optimization and comprehensive load balancing.

Azure Autoscale

For increased flexibility with respect to performance, capacity, and availability, BIG-IPs can be deployed into scale sets (refer to Figure 2 below). By combining multiple public-facing IP endpoints, multiple interfaces, and horizontal and vertical auto scaling, it’s possible to efficiently run multiple optimized, secure, and highly available applications.

Note: Currently, multiple BIG-IP instance deployments (including scale sets) must be deployed programmatically, typically via an ARM template. Here’s the good news: F5 has several ARM templates available on GitHub at https://github.com/F5Networks/f5-azure-arm-templates.
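If you want to kick the tires on one of those templates without leaving the shell, the Azure CLI can drive the deployment. The sketch below is illustrative only: the resource group name, parameter values, and the exact template path within the F5 repository are assumptions you should adjust, and older CLI releases use "az group deployment create" in place of "az deployment group create".

# Sketch: deploying an F5 ARM template with the Azure CLI.
# Resource group, template URI, and parameter values are assumptions.
az group create --name bigip-ha-rg --location eastus

az deployment group create \
  --resource-group bigip-ha-rg \
  --template-uri https://raw.githubusercontent.com/F5Networks/f5-azure-arm-templates/master/azuredeploy.json \
  --parameters adminUsername=azureuser adminPassword='<secure-password>'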
Active / Standby Deployment with Public Endpoint Migration

As I just mentioned, in most cases an active/active deployment is preferred. However, there may be stateful applications that still require load balancing mechanisms beyond an Azure load balancer’s capability. Thanks to the guys in product development, there’s an experimental ARM template available on GitHub for deploying a pair of active/standby BIG-IPs. This deployment option mimics F5’s traditional on-premises model (thanks again, Mike Shimkus).

Global High Availability

With data centers located all over the world, it’s possible to place your application close to end users wherever they might be. By incorporating BIG-IP DNS (formerly GTM), applications can be deployed globally for performance as well as availability. Users can be directed to the appropriate application instance. In the event an application becomes unavailable or overloaded, users will be automatically redirected to a secondary subscription or region. This can be implemented down to a specific virtual server; all other unaffected traffic will still be sent to the desired region.

Well friends, that’s it for this week. Stay tuned for next week when we take a look at life cycle management. Or would you prefer some Vogon poetry?

Additional Links:

The Hitchhiker’s Guide to BIG-IP in Azure
The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”
BIG-IP in Azure? Are You Serious?
F5 Networks GitHub
Overview of Autoscale in Microsoft Azure Virtual Machines, Cloud Services, and Web Apps
Understand the structure and syntax of Azure Resource Manager templates
Deploying BIG-IP Virtual Edition in Azure

The Hitchhiker’s Guide to BIG-IP in Azure
“Happy Cloud Month everybody!” In honor of F5 DevCentral’s first official (at least that’s what they tell me) cloud month, we thought it would be a great time to circle back with you, our dear readers, and provide an overview of the BIG-IP in Azure. So with that said… welcome to the first installment of our new series, “The Hitchhiker’s Guide to BIG-IP in Azure”. Okay, so maybe not the most original of titles (sorry, Douglas Adams), but hopefully it will give me a chance to throw in an obscure movie reference or two. Over the next four weeks we will take a closer look at everything around Azure and BIG-IP, from architecture considerations to deployment scenarios. We may even throw in a little life cycle management for good measure. Alright, fellow interstellar travelers, let’s grab our trusty towels and boogie.

Azure Architectural Considerations

Before taking a look at deploying the F5 BIG-IP into Azure (come back next week), we should review a few key characteristics that differentiate an Azure virtual network environment from a “traditional” on-premises network infrastructure.

Limited Visibility

In a traditional networking environment, the entire network stack, including OSI layers 2/3, is exposed to attached devices. Having the ability to interact with the network at these lower layers (specifically the Data Link layer, aka L2) is a key requirement for some of the BIG-IP’s core functionality, most notably with respect to high availability. In the public cloud (including Microsoft Azure, AWS, and Google Cloud), the lower networking layers are obfuscated, and devices such as the BIG-IP must find new ways of delivering the same functionality. For example, the BIG-IP traditionally relies upon floating MAC and IP addresses (requiring L2/3 visibility and control) to handle graceful failover of services to a standby BIG-IP.

Routing

Routing within an Azure virtual network is handled automatically by Azure IaaS through the use of pre-defined system routes. By default, all subnets within an Azure virtual network have open connectivity (see fig. #1 below). Subscribers can also create user-defined routes allowing for greater control and flexibility; more on that later.

Hybrid Scenarios

In addition to internal network routing, Azure supports connectivity across virtual networks and external networks via native technologies (site-to-site VPN, point-to-site VPN, or ExpressRoute) and/or third-party solutions such as the BIG-IP.

BIG-IP in Azure IaaS

How to architect the BIG-IP into your Azure infrastructure will depend on a number of factors including, but not limited to, the number and types of services provided, availability requirements, and virtual network design.

Single-NIC

The BIG-IP platform was first made available for Azure deployments back in October 2015. At the time, virtual machine deployments of this type were limited to a single network interface with one external-facing endpoint. Though perhaps not the ideal configuration for a network appliance, the single-NIC design (see fig. #3 below) does allow for the injection of F5 BIG-IP services such as WAF, traffic optimization, SSL offload, etc. What’s more, it’s currently the only option for deploying a BIG-IP directly out of the Azure marketplace. In the above example diagram, the single-NIC BIG-IP deployment provides:

- Application load balancing;
- Web application firewall (WAF);
- Secure remote access;
- Global load balancing; and
- Virtual network traffic management and control.
While a viable alternative to a more traditional multi-homed configuration, this does mean that all traffic (both management and data) utilizes the same interface and as such will impact the overall throughput available to the underlying application(s).

Multi-NIC

Over the past several quarters, Microsoft has introduced several enhancements to the Azure infrastructure; most notably, support for multiple interfaces per virtual machine and multiple public and private IP addresses per network interface. For me, that’s as cool as the Infinite Improbability Drive! Ok, maybe I’m overstating the importance; it’s still pretty cool. Check out the links and you be the judge. Regardless of whether I’m overstating the “coolness” factor, this new functionality enables the BIG-IP to be deployed and configured in a more traditional multi-armed configuration (refer to fig. #4). Additionally, multiple applications can be deployed behind a single BIG-IP (or HA pair) instance.

User-defined Routing

In addition to application delivery, the BIG-IP may also be configured to provide traffic management within an Azure network infrastructure. For example, as previously shown in fig. #1, the BIG-IP configured with Advanced Firewall Manager (AFM) can be situated as a single point of control within the virtual network. User-defined routing is then configured to route intranet traffic through the BIG-IP, and AFM is used to control traffic flow.
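As a rough sketch of what that steering looks like from the Azure CLI: create a route table, add a route whose next hop is the BIG-IP acting as a virtual appliance, and associate the table with the subnet to be controlled. All names and addresses below are assumptions for illustration; 10.0.1.10 stands in for the BIG-IP’s internal self IP.

# Sketch: a user-defined route that sends traffic through the BIG-IP.
az network route-table create --resource-group my-rg --name bigip-udr

az network route-table route create \
  --resource-group my-rg \
  --route-table-name bigip-udr \
  --name via-bigip \
  --address-prefix 10.0.2.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.10

# Associate the route table with the subnet whose traffic should be inspected.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name app-subnet \
  --route-table bigip-udr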
IPsec VPN / Remote Access

The BIG-IP with Local Traffic Manager (LTM) may be deployed on-premises, in an Azure environment, or at a colocation facility to provide hybrid connectivity and remote access. Check out some of our previous posts for more information on this type of deployment.

That’s it for now. Stay tuned for next week when we will take a closer look at deployment options for the BIG-IP in Azure.

Additional Links:

BIG-IP in Azure? Are You Serious?
User-defined routes and IP forwarding
Windows Azure Virtual Networks
BIG-IP to Azure Dynamic IPsec Tunneling
Connecting to Windows Azure with the BIG-IP
About VPN devices for site-to-site virtual network connections
Configuring IPsec between a BIG-IP system and a third-party device

The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”

“A towel, [The Hitchhiker's Guide to the Galaxy] says, is about the most massively useful thing an interstellar hitchhiker can have. Partly it has great practical value. You can wrap it around you for warmth as you bound across the cold moons of Jaglan Beta; you can lie on it on the brilliant marble-sanded beaches of Santraginus V, inhaling the heady sea vapors; you can sleep under it beneath the stars which shine so redly on the desert world of Kakrafoon; use it to sail a miniraft down the slow heavy River Moth; wet it for use in hand-to-hand-combat; wrap it round your head to ward off noxious fumes or avoid the gaze of the Ravenous Bugblatter Beast of Traal (such a mind-boggingly stupid animal, it assumes that if you can't see it, it can't see you); you can wave your towel in emergencies as a distress signal, and of course dry yourself off with it if it still seems to be clean enough.” ― Douglas Adams, The Hitchhiker's Guide to the Galaxy

Ok, so maybe you can’t use an F5 BIG-IP as a sail for your miniraft, but have you ever tried stopping a layer 7 DDoS attack with a towel? Can a towel secure your application while providing high availability and scalability? What’s more, while you may not be able to wrap a BIG-IP around you, I bet a server rack full of BIG-IP VIPRIONs would keep you a heck of a lot warmer than some old towel. I’m just saying.

Hello and welcome to part 2 of our 4-part series, “The Hitchhiker’s Guide to BIG-IP in Azure”. In this installment we’ll take a closer look at some ways in which to deploy the BIG-IP into the Azure infrastructure. With regard to Azure, the BIG-IP is basically another compute workload and as such can be deployed through multiple avenues.

Azure Marketplace

Deploying a BIG-IP out of the Azure Marketplace is by far the easiest method. As the screenshot below illustrates, there are several options to choose from, including hourly billing and BYOL. Additionally, there are solution-specific offerings as well, for WAF (Web Application Firewall) and Office 365 federation, with more to follow. These offers will deploy and fully configure solution-specific BIG-IP(s) without any further interaction required.

It is important to note that while this is a very easy deployment method, there are some limitations. For example, the marketplace allows only for single-NIC deployments. Additionally, aside from basic initial configuration, the BIG-IP will require additional manual configuration (i.e. licensing, provisioning, etc.). Once you have selected an appropriate offer, it is simply a matter of providing the required parameters (refer to below) and accepting the EULA. The BIG-IP and relevant resources (virtual network, NIC, storage, etc.) are deployed and ready to use within approximately 20 minutes.
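If you prefer the command line to the portal, the Azure CLI can also enumerate the marketplace offers. A small sketch follows, with the caveat that the publisher name used here (f5-networks) is an assumption you should confirm with the first query.

# Sketch: browsing F5 marketplace images from the Azure CLI.
# Confirm the actual publisher name before relying on it.
az vm image list-publishers --location eastus --query "[?contains(name,'f5')]" -o table

az vm image list --location eastus --publisher f5-networks --all -o table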
ARM template

Deploying a BIG-IP by way of an Azure Resource Manager (ARM) template allows for the greatest deployment flexibility. In a nutshell, an ARM template is a JSON file that defines Azure resources (including virtual machines like the BIG-IP VE) to deploy into an Azure infrastructure (see example below). The template can be used to deploy resources in a repeatable and consistent manner. Additionally, you get the following benefits:

- Highly customizable and automated post-deployment configurations by way of Azure custom script extensions;
- Multiple BIG-IP instances;
- Multiple interfaces; and
- Multiple public-facing IP addresses.

The real power of ARM templates is how they enable quick, consistent, and repeatable automated deployments from a variety of sources such as the Azure portal, PowerShell, the Azure CLI, or any number of orchestration tools such as Ansible, Puppet, and Chef. To help, F5 Networks has published several ARM templates via GitHub. These templates provide an excellent starting point for customers requiring more complex or custom deployments, and they can be easily modified to allow for very specific, yet repeatable, deployment scenarios.

CLI & PowerShell & Azure Portal

As I mentioned above, there are a number of options available for deploying resources via an ARM template into Azure. Two common methods are the Microsoft Azure command-line utility and Azure PowerShell. The inline links at left will direct you to official Azure guidance and additional resources. Thanks to the efforts of some super talented engineers (Michael Shimkus and James Sevedge), there are deployment scripts for both options available out on GitHub at https://github.com/F5Networks/f5-azure-arm-templates to complement the above-mentioned ARM templates. Pretty cool!

Perhaps the easiest way to deploy an ARM template is through the Azure portal. It’s a fairly simple matter of obtaining a copy of an ARM template and uploading it into the portal via a new custom deployment. Here is a quick overview of the process. The absolutely guaranteed* easiest way to deploy a BIG-IP ARM template is straight out of the F5 Networks GitHub repository. Just look for and click on the desired quick deployment button.
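For completeness, here is a hedged sketch of the local-file workflow those scripts wrap: validate the template against your subscription first, then deploy it. Resource group, file, and parameter-file names are placeholders.

# Sketch: validate, then deploy, a downloaded ARM template from the CLI.
az deployment group validate \
  --resource-group bigip-rg \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json

az deployment group create \
  --resource-group bigip-rg \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json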
Ok. That’s it for now. Stay tuned for next week’s installment, The Hitchhiker’s Guide to BIG-IP in Azure – “High Availability”.

* Note: The author by no means actually guarantees much of anything; therefore any claims will be arbitrated on Vogsphere by means of a Vogon poetry slam. Yeah!!! I got one more movie reference in just under the wire.

Additional Links:

The Hitchhiker’s Guide to BIG-IP in Azure
BIG-IP in Azure? Are You Serious?
F5 Networks GitHub
Understand the structure and syntax of Azure Resource Manager templates
Deploying BIG-IP Virtual Edition in Azure

The Service Model for Cloud/Automated Systems Architectures

Recap

Previous Article: Cloud/Automated Systems need an Architecture

In the last article we spent some time laying the foundation of an Architecture for Cloud and Automated Systems. In that article we discussed that our Architecture consists of three Models that form our triangle. Additionally, to provide context, we covered the following Architectural Truths:

- Enable a DevOps Methodology and Toolchain
- Lower or eliminate Domain Specific Knowledge
- Leverage Infrastructure-as-Code
- Don’t sacrifice Functionality to Automate
- Provide a Predictable Cost Model
- Enable delivery of YOUR innovation to the market

As we continue on our journey through this series of articles, it's important to remember our Architectural Truths. These items are the unbreakable laws of our Architecture. As we develop our models we will take these Truths as base assumptions.

What's in a Model?

So, what are these models? We referenced them continually throughout the last article, but now it's time to define them. Our Architecture consists of three models:

- Service Model: Maps business requirements to automatable technology use cases. In the case of F5 products, the Service Model defines the Layer 4-7 services delivered by the F5 platform.
- Deployment Model: Implements Continuous Deployment of the Service and Operational Models into an Environment.
- Operational Model: Provides stable and predictable workflows for changes to production environments, data and telemetry gathering, and troubleshooting.

Within these models we further define Truths and Attributes. Truths, as implied, should always be adhered to within a given Model. Attributes, however, are more flexible, because the choice to implement an Attribute is made while implementing a specific Expression of the Model into an Environment. In the first article in this series we discussed how a Model evolves over time. We can now expand on this idea to involve iteration: as you can see in the image above, we implement Continuous Improvement on each Iteration of the Model. Generally, in each iteration we can implement new Attributes or improve on existing ones.

Service Model

Now that we understand the concept of a Model and how that Model fits into our larger Architecture, let's dive into the details of the Service Model. As discussed above, each Model has a set of Truths and Attributes. First, we'll cover our Truths, then the Attributes, and finally we will provide an F5-specific expression of each Attribute. This pattern will be used for all of our Models.

Service Model Truths

Let's take a look at the Truths for the Service Model:

- Provide Appropriately Abstracted Services
- Be Declarative, Composable and Mutable
- Be Consumable in an On-Demand manner
- Be Loosely Coupled to the Deployment and Operational Model

Provide Appropriately Abstracted Services

As discussed extensively, we must utilize abstraction to lower or eliminate the need for Domain Specific Knowledge and build Declarative Interfaces. To achieve this, each business requirement should be mapped to a use case that a specific set of technology can deliver. After that mapping is identified, it is essential that the perspective of the consumer of that service is used to establish the baseline requirements for deploying a Service. For example, let's refer back to Jammin', our jam sandwich restaurant. The Drive-Thru lane is a Declarative Interface, the Menu itself is a Service Model (or Service Catalog), and a Menu Item is a Service.
When we establish the baseline Domain Specific Knowledge required to interact with our Service Model, we assume that our Consumer knows how to use a Drive-Thru and also understands the basic differences between our Menu Items (strawberry vs. grape jam). From the perspective of the Consumer, having two menu items for these types of sandwiches is the easiest interface. What happens behind the scenes to deliver the sandwiches is irrelevant to the Consumer. The only thing that matters is that they received the easiest possible interface.

Be Declarative, Composable and Mutable

We've already discussed what Declarative means, so let's focus on the new items introduced here:

- Composable: The ability to assemble features and use cases in different Service combinations that meet business requirements.
- Mutable: The ability to change, or Mutate, service offerings as needed to meet business and operational requirements. Additionally, services themselves should clearly define how a specific Service can be mutated by the Operational Model.

Let's imagine that Jammin' is selling sandwiches left and right... Due to our massive success, we've realized that we could delight even more customers by adding some more items to our menu. Because we implemented the F5 Architecture for Automated Systems, we're able to do this easily. The new menu includes the ability to choose Jelly instead of Jam (yes, they are different... look it up!) and a brand new sandwich innovation we are calling "Peanut Butter and Jammin'". Luckily, we thought ahead a little bit when designing our kitchen and included stations to store these new ingredients. This allows our service offerings to be Composable. We can combine ingredients in different ways to meet the needs of our consumers. All we need to do is add some steps in our Imperative Process to deliver these new products.

Now that we can make the sandwich, how do we let our customers know about our new offerings? This is where the Mutable aspect of service offerings comes in. When we built our restaurant we added digital menus to our Drive-Thru lanes. This allows Jammin' (the undisputed leader in jam-based sandwich innovation) to quickly change (Mutate) its menus to include the newest offerings as soon as they are available.

The second type of Mutability, Service Mutability, is not easily explained using our jam sandwich analogy. Instead we will use an F5-specific example in this case. Clearly defining how a Service itself is changed is critical later on when we implement an Operational Model. An example is how a Service with a Layer 7 Web Application Firewall policy will apply changes to that policy. Does the Service redeploy a standardized base policy? Does it allow the policy to change after deployment and preserve those changes? When working with L7 services, it's important that Mutation of the L7 policy component is accounted for and the intended behavior is clearly defined by the Service.

Be Consumable in an On-Demand manner

We've discussed quite a bit about Declarative Interfaces; however, one aspect of that model has not been addressed yet: the ability to deliver a service in an as-needed or on-demand manner. This truth is usually bound within defined limits. The key idea here is that a Service definition should be deployable in a production state in an on-demand manner. To satisfy this requirement it is often necessary that other systems are orchestrated together (IPAM, DNS, Compute, etc.).
Be Loosely Coupled to the Deployment and Operational Model

Our final truth is critical to making sure a Service Model is portable between different environments. From a L4-7 service perspective, we want to ensure that the Service definition does not change when the underlying L1-3 infrastructure is changed or replaced. While there is always a connection between L1-3 (packets gotta route) and L4-7 (ports gotta listen), the Service Model should always assume a very loose coupling between these layers. Additionally, the Service Model should allow any number of Operational Models to be implemented without changing. This allows us to address the needs of new methodologies (DevOps, etc.) while still maintaining support for existing methodologies.

Service Model Attributes

Now that we've covered our Truths, let's take a look at our Attributes:

- Abstracted
- Declarative
- Composable
- Mutable
- Loosely Coupled

Abstracted

"The L4-7 service must be appropriately abstracted and leverage implicit configuration whenever possible"

We've discussed abstraction quite a bit already, so let's focus on implicit configuration. Implicit configuration means that a Service definition should derive as much as possible of the metadata required to deploy the Service from the environment itself. The base assumption when defining services should be to require user/consumer input as a last resort. Some examples of implicit configuration are:

- Integrate with IP Address Management (IPAM) systems for IP assignment
- Automatically enable functionality based on the service metadata
- Enable X-Forwarded-For HTTP header insertion if SNAT is configured
- Configure base WAF policies that auto-detect common web frameworks
- Configure HTTP health monitors when using an HTTP well-known port

Declarative

"The Abstracted Service must be deployable in a single-call declarative manner"

Single-call is the key term here. If a Service has been appropriately abstracted, it should be deployable with a single API call. One important nuance here is that the outcome of the deployment call may not always be a successful deployment. The impact of not implementing this Attribute is that Imperative Process elements leak into the Declarative Interface.
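To make "single-call" concrete, here is a purely hypothetical sketch of such a call. The endpoint, tenant, template name, and payload schema below are invented for illustration and do not document a real F5 or iWorkflow API; the point is simply that one POST carries the consumer's entire declaration, and the catalog does the rest.

# Hypothetical sketch only: a single declarative API call deploying a service.
# Host, path, credentials, and JSON schema are all illustrative assumptions.
curl -sk -u admin:admin \
  -H "Content-Type: application/json" \
  -X POST https://catalog.example.com/api/services \
  -d '{
        "name": "web-app-https",
        "template": "http-service-v1",
        "vars": { "pool_addr": "10.0.2.100", "pool_port": "443" }
      }'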
Loosely Coupled "The Service Model should not rely on the underlying expression of a Deployment and Operational Model" One of the goals of the Architecture based approach is to provide a stable foundation for implementing Automated Systems. When we cover the Deployment and Operational Models you will see that the underlying components and methodologies of a system are extremely variable. In order to provide a Portable Service Model it's important to design services that minimize assumptions from Deployment and Operational Model attributes. Service Model - F5 Expression Ok, we've covered a lot. It's time to tie all of this together with some specific examples. This final slide shows an example of how to implement all the Attributes we've discussed using F5 technology. As we've discussed, it's not required to implement every attribute in the first iteration. The slide references some F5 specific technology such as iApp's, iWorkflow (iWf), etc. For context here are links to more documentation for each tool: iApps: https://devcentral.f5.com/s/iapps iWorkflow: https://devcentral.f5.com/s/iworkflow App Services iApp: https://devcentral.f5.com/s/wiki/iapp.appsvcsiapp_index.ashx Wrap up 1 Model down, 2to go! Next we'll cover the Deployment Model. Thanks for reading! Next Article: The Deployment Model for Cloud/Automated Systems Architectures1KViews0likes0CommentsWhat's Happening Inside my Kubernetes Cluster?
Introduction

This article series has taken us a long way. We started with an overview of Kubernetes. In the second week we deployed complex applications using Helm and visualized complex applications using Yipee.io. During the third week we enabled advanced application delivery services for all of our pods. In this fourth and final week, we are going to gain visibility into the components of the pod. Specifically, we are going to deploy a microservice application consisting of multiple pods and aggregate all of the logs into a single pane of glass using Splunk. You will be able to slice and dice the logs any number of ways to see exactly what is happening down at the pod level. To accomplish visibility, we are going to do four things:

1. Deploy a microservices application
2. Configure Splunk Cloud
3. Configure Kubernetes to send logs to Splunk Cloud
4. Visualize the logs

Deploy a Microservices Application

As in previous articles, this article will take place using Google Cloud. Log into the Google Cloud Console and create a cluster. Once done, open a Google Cloud Shell session. Fortunately, Eberhard Wolff has already assembled a simple microservices application. First, set the credentials.

gcloud container clusters get-credentials cluster-1 --zone us-central1-a

We simply need to download the shell script.

wget https://raw.githubusercontent.com/ewolff/microservice-kubernetes/master/microservice-kubernetes-demo/kubernetes-deploy.sh

Next, simply run the shell script. This may take several minutes to complete.

bash ./kubernetes-deploy.sh

Once finished, check to see that all of the pods are running. You will see the several pods that comprise the application. Many of the pods provide small services (microservices) to other pods.

kubectl get pods

If that looks good, find the external IP address of the Apache service. Note that the address may be pending for several minutes. Run the command until a real address is shown.

kubectl get svc apache

Put the external IP address into your browser. The simple application has several functioning components. Feel free to try each of the functions. Every click will generate logs that we can analyze later. That was it. You now have a microservices application running in Kubernetes. But which component is processing what traffic? If there are any slowdowns, which component is having problems?
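If clicking around feels too slow, a small loop can generate steady traffic against the app so the dashboards later on have something to show. This sketch assumes the Apache service exposes a LoadBalancer IP, as created above; on some platforms the ingress field is a hostname rather than an IP.

# Sketch: generate steady traffic against the demo app's external IP.
EXTERNAL_IP=$(kubectl get svc apache -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

for i in $(seq 1 100); do
  # Print just the HTTP status code for each request.
  curl -s -o /dev/null -w "%{http_code}\n" "http://${EXTERNAL_IP}/"
  sleep 1
done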
Configure Splunk Cloud

Splunk is a set of tools for ingesting, processing, searching, and analyzing machine data. We are going to use it to analyze our application logs. Splunk comes in many forms, but for our purposes, the free trial of the cloud (hosted) service will work perfectly. Go to https://www.splunk.com/en_us/download.html, then select the Free Cloud Trial. Fill out the form. The form may take a while to process. View the instance. Finally, accept the terms. You now have a Splunk instance free for the next 15 days. Watch for an email from Splunk. Much of the Splunk and Kubernetes configuration steps are from http://jasonpoon.ca/2017/04/03/kubernetes-logging-with-splunk/.

When the email from Splunk arrives, click on the link. This is your private instance of Splunk Cloud that has a lot of sample records. To finish the configuration, first let Splunk know that you want to receive records from a Universal Forwarder, which is Splunk-speak for an external agent. In our case, we will be using the Universal Forwarder to forward container logs from Kubernetes. To configure Splunk, click to choose a default dashboard. Select Forwarders: Deployment. You will be asked to set up forwarding. Click to enable forwarding. Click Enable. Forwarding is configured.

Next we need to download the Splunk credentials. Go back to the link supplied in the email, and click on the Universal Forwarder in the left pane. Download the Universal Forwarder credentials. We need to get this file to the Google Cloud Shell. One way to do that is to create a bucket in Google Storage. On the Google Cloud page, click on the Storage Browser. Create a transfer bucket. You will need to pick a name unique across Google. I chose mls-xfer. After typing in the name, click Create. Next, upload the credentials file from Splunk by clicking Upload Files. That’s all we need from Splunk right now. The next step is to configure Kubernetes to send the log data to Splunk.

Configure Kubernetes to Send Logs to Splunk Cloud

In this section we will configure Kubernetes to send container logs to Splunk for visualization and analysis. Go to Google Cloud Shell to confirm the Splunk credential file is visible. Substitute your bucket name for mls-xfer.

gsutil ls gs://mls-xfer

If you see the file, then you can copy it to the Google Cloud Shell. Again, use your bucket name. Note the trailing dot.

gsutil cp gs://mls-xfer/splunkclouduf.spl .

If successful, you will have the file in the Google Cloud Shell where you can extract it.

tar xvf ./splunkclouduf.spl

You should see the files being extracted.

splunkclouduf/default/outputs.conf
splunkclouduf/default/cacert.pem
splunkclouduf/default/server.pem
splunkclouduf/default/client.pem
splunkclouduf/default/limits.conf

Next we need to build a file to deploy the Splunk forwarder.

kubectl create configmap splunk-forwarder-config --from-file splunkclouduf/default/ --dry-run -o yaml > splunk-forwarder-config.yaml

Before using that file, we need to add some lines near the end of it, after the last certificate.

  inputs.conf: |
    # watch all files in
    [monitor:///var/log/containers/*.log]
    # extract `host` from the first group in the filename
    host_regex = /var/log/containers/(.*)_.*_.*\.log
    # set source type to Kubernetes
    sourcetype = kubernetes

Spaces and colons are important here. The last few lines of my splunk-forwarder-config.yaml file look like this:

    -----END ENCRYPTED PRIVATE KEY-----
  inputs.conf: |
    # watch all files in
    [monitor:///var/log/containers/*.log]
    # extract `host` from the first group in the filename
    host_regex = /var/log/containers/(.*)_.*_.*\.log
    # set source type to Kubernetes
    sourcetype = kubernetes
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: splunk-forwarder-config

Create the configmap using the supplied file.

kubectl create -f splunk-forwarder-config.yaml
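Before moving on, it's worth a quick sanity check that the configmap exists and contains the inputs.conf stanza. Both commands below are standard kubectl.

# Sketch: verify the configmap before the forwarders mount it.
kubectl get configmap splunk-forwarder-config
kubectl describe configmap splunk-forwarder-config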
The next step is to create a daemonset, which is a container that runs on every node of the cluster. Copy and paste the text below into a file named splunk-forwarder-daemonset.yaml using vi or your favorite editor.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: splunk-forwarder-daemonset
spec:
  template:
    metadata:
      labels:
        app: splunk-forwarder
    spec:
      containers:
      - name: splunkuf
        image: splunk/universalforwarder:6.5.2-monitor
        env:
        - name: SPLUNK_START_ARGS
          value: "--accept-license --answer-yes"
        - name: SPLUNK_USER
          value: root
        volumeMounts:
        - mountPath: /var/run/docker.sock
          readOnly: true
          name: docker-socket
        - mountPath: /var/lib/docker/containers
          readOnly: true
          name: container-logs
        - mountPath: /opt/splunk/etc/apps/splunkclouduf/default
          name: splunk-config
        - mountPath: /var/log/containers
          readOnly: true
          name: pod-logs
      volumes:
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
      - name: container-logs
        hostPath:
          path: /var/lib/docker/containers
      - name: pod-logs
        hostPath:
          path: /var/log/containers
      - name: splunk-config
        configMap:
          name: splunk-forwarder-config

Finally, create the daemonset.

kubectl create -f splunk-forwarder-daemonset.yaml

The microservice app should be sending logs right now to your Splunk Cloud instance. The logs are updated every 15 minutes, so it might be a while before the entries show in Splunk. For now, explore the microservices ordering application so that log entries are generated. Feel free also to explore Splunk.
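A quick verification sketch: confirm a forwarder pod is scheduled on each node and peek at its output. The label selector matches the daemonset spec above; if your kubectl release doesn't support -l on logs, name one of the pods directly instead.

# Sketch: verify the forwarder daemonset is running on every node.
kubectl get daemonset splunk-forwarder-daemonset
kubectl get pods -l app=splunk-forwarder -o wide
kubectl logs -l app=splunk-forwarder --tail=20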
Visualize the Logs

Now that logs are appearing in Splunk, go to the link in the email from Splunk. You should see a dashboard with log entries representing activity in the order-processing application. Immediately on the dashboard you can see several options. You can drill down into the forwarders by status. Further down the page you can see a list of the forwarding instances, along with statistics. Below that is a graph of activity across the instances. Explore the Splunk application. The combination of logs from several pods provides insight into the activity among the containers.

Clean Up

When you are done exploring, cleaning up is a breeze. Just delete the cluster.

Conclusion

This article series helped scratch the surface of what is possible with Kubernetes. Even if you had no Kubernetes knowledge, the first article gave an overview and deployed a simple application. The second article introduced two ways to automate deployments. The third article showed how to integrate application delivery services. This article closed the loop by demonstrating application monitoring capabilities. You are now in a position to have a meaningful conversation about Kubernetes with just about anyone, even experts. Kubernetes is growing and changing quickly, and welcome to the world of containers.

Series Index

Deploy an App into Kubernetes in less than 24 Minutes
Deploy an App into Kubernetes Even Faster (Than Last Week)
Deploy an App into Kubernetes Using Advanced Application Services
What's Happening Inside my Kubernetes Cluster?

BIG-IP deployments using Ansible in private and public cloud

F5 has been actively developing Ansible modules that help in deploying an application on the BIG-IP. For a list of candidate modules for the Ansible 2.4 release, refer to the GitHub link. These modules can be used to configure any BIG-IP (physical/virtual) in any environment (public, private, or hybrid cloud). Before we can use the BIG-IP to deploy an application, we need to spin up a virtual edition of the BIG-IP. Let’s look at some ways to spin up a BIG-IP in the public and private cloud.

Private cloud

Create a BIG-IP guest VM through VMware vSphere

For more details on the Ansible module, refer to the Ansible documentation.

Pre-condition: On the VMware side, a template of the BIG-IP image has been created.

Example playbook:

- name: Create VMware guest
  hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Deploy BIG-IP VE
      vsphere_guest:
        vcenter_hostname: 10.192.73.100           # vCenter hostname or IP address
        esxi:
          datacenter: F5 BD Lab                   # Datacenter name
          hostname: 10.192.73.22                  # esxi hostname or IP address
        username: root                            # vCenter username
        password: "*****"                         # vCenter password
        guest: "BIGIP-VM"                         # Name of the BIG-IP to be created
        from_template: yes
        template_src: "BIG-IP VE 12.1.2.0.0.249-Template"   # Name of the template

Spin up a BIG-IP VM in VMware using govc

For more details on govc, refer to the govc GitHub and VMware GitHub pages.

Pre-condition: govc has been installed on the Ansible host.

Example playbook:

- name: Create VMware guest
  hosts: localhost
  connection: local
  tasks:
    - name: Import OVA and deploy BIG-IP VM
      command: "/usr/local/bin/govc import.ova -name=newVM-BIGIP005 /tmp/BIGIP-12.1.2.0.0.249.LTM-scsi.ova"   # Import the BIG-IP ova file
      environment:
        GOVC_HOST: "10.192.73.100"                # vCenter hostname or IP address
        GOVC_URL: "https://10.192.73.100/sdk"
        GOVC_USERNAME: "root"                     # vCenter username
        GOVC_PASSWORD: "*******"                  # vCenter password
        GOVC_INSECURE: "1"
        GOVC_DATACENTER: "F5 BD Lab"              # Datacenter name
        GOVC_DATASTORE: "datastore1 (5)"          # Datastore on which to store the ova file
        GOVC_RESOURCE_POOL: "Testing"             # Resource pool to use
    - name: Power on the VM
      command: "/usr/local/bin/govc vm.power -on newVM-BIGIP005"
      environment:
        GOVC_HOST: "10.192.73.100"
        GOVC_URL: "https://10.192.73.100/sdk"
        GOVC_USERNAME: "root"
        GOVC_PASSWORD: "vmware"
        GOVC_INSECURE: "1"
        GOVC_DATACENTER: "F5 BD Lab"
        GOVC_DATASTORE: "datastore1 (5)"
        GOVC_RESOURCE_POOL: "Testing"

Public Cloud

Spin up a BIG-IP using CloudFormation templates in AWS

For more details on the BIG-IP CloudFormation templates, refer to the following GitHub page.

Pre-condition: The CloudFormation JSON template has been downloaded to the Ansible host.

Example playbook:

- name: Launch BIG-IP CFT in AWS
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Launch BIG-IP CFT
      cloudformation:
        aws_access_key: "******************"      # AWS access key
        aws_secret_key: "******************"      # AWS secret key
        stack_name: "StandaloneBIGIP-1nic-experimental-Ansible"
        state: "present"
        region: "us-west-2"
        disable_rollback: true
        template: "standalone-hourly-1nic-experimental.json"   # JSON blob for the CFT
        template_parameters:                      # Template parameters
          availabilityZone1: "us-west-2a"
          sshKey: "bigip-test"
        validate_certs: false
      register: stack
    - name: Get facts (IP address) from the CloudFormation stack
      cloudformation_facts:
        aws_access_key: "*****************"
        aws_secret_key: "*****************"
        region: "us-west-2"
        stack_name: "StandaloneBIGIP-1nic-experimental-Ansible"
      register: bigip_ip_address
    - set_fact:                                   # Extract the BIG-IP MGMT IP address
        ip_address: "{{ bigip_ip_address['ansible_facts']['cloudformation']['StandaloneBIGIP-1nic-experimental-Ansible']['stack_outputs']['Bigip1subnet1Az1SelfEipAddress'] }}"
    - copy:                                       # Copy the BIG-IP MGMT IP address to a file
        content: "bigip_ip_address: {{ ip_address }}"
        dest: "aws_var_file.yaml"                 # The copied IP address can be referenced from this file
        mode: 0644

The above are a few ways to spin up a BIG-IP Virtual Edition in your private/public cloud environment. Once the BIG-IP is installed, use the F5 Ansible modules to deploy the application on the BIG-IP. Refer to this DevCentral article to learn more about Ansible roles and how we can use roles to onboard and network a BIG-IP. Included is a simple playbook that you can download and run against the BIG-IP.

- name: Onboarding BIG-IP
  hosts: bigip                                    # bigip group must be present in the Ansible inventory file
  gather_facts: false
  tasks:
    - name: Configure NTP server on BIG-IP
      bigip_device_ntp:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        ntp_servers: "172.2.1.1"
        validate_certs: False
      delegate_to: localhost
    - name: Configure BIG-IP hostname
      bigip_hostname:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        validate_certs: False
        hostname: "bigip1.local.com"
      delegate_to: localhost
    - name: Manage SSHD setting on BIG-IP
      bigip_device_sshd:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        banner: "enabled"
        banner_text: "Welcome - CLI username/password to login"
        validate_certs: False
      delegate_to: localhost
    - name: Manage BIG-IP DNS settings
      bigip_device_dns:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        name_servers: "172.2.1.1"
        search: "localhost"
        ip_version: "4"
        validate_certs: False
      delegate_to: localhost
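Running the play is the standard ansible-playbook invocation. The inventory and playbook file names below are assumptions; the inventory just needs a bigip group matching the hosts line above.

# Sketch: run the onboarding play against the inventory's bigip group.
# Example inventory file contents (assumed):
#   [bigip]
#   bigip1 ansible_host=10.1.1.245
ansible-playbook -i inventory onboard-bigip.yaml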
For more information on BIG-IP Ansible playbooks, visit the following GitHub link.

The Hitchhiker’s Guide to BIG-IP in Azure – “Life Cycle Management”

Hello fellow travelers, and welcome to the fourth and final installment of “The Hitchhiker’s Guide to BIG-IP in Azure”. In the spirit of teamwork (and because he’s an even bigger sci-fi nerd than me), I’ve asked my colleague, Patrick Merrick, to provide the commentary for this final installment. Take it away, Patrick!

Hi travelers! No doubt you have been following the evolution of this blog series as Greg navigated Azure-specific topics describing how cloud-based services differ from our traditional understanding of how we position the BIG-IP in an on-premises deployment. If not, I have provided the predecessors to this post below for your enjoyment!

The Hitchhiker’s Guide to BIG-IP in Azure
The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”
The Hitchhiker’s Guide to BIG-IP in Azure – “High Availability”

To carry on the theme, I have decided to also take a page from the legendary author Douglas Adams to help explain F5’s position on life cycle management. Life cycle management historically can be likened to the Infinite Improbability Drive: regardless of best intentions, you rarely end up in the space you had intended, but generally where you needed to be. For those of you who are not “in the know”, I have left a brief description of said improbability drive below.

“The infinite improbability drive is a wonderful new method of crossing interstellar distances in a mere nothing of a second, without all that tedious mucking about in hyperspace. It was discovered by lucky chance, and then developed into a governable form of propulsion by the Galactic Government's research centre on Damogran.” - Douglas Adams, “The Hitchhiker's Guide to the Galaxy”

In my previous life, I was a consultant and had the duty of integrating solutions into previously architected infrastructures without causing disruption to end users. In this “new’ish” world of DevOps, or “life at cloud speed”, we are discovering that life cycle management isn’t necessarily tied to major releases and minor updates. With that said, let’s dispense with the Vogon bureaucratic method, grab our towels, and wade into deep water.

“According to the Guide, the Vogons are terribly bureaucratic and mean. They're not going to break the rules in order to help you. On the other hand, they're not exactly evil—they're not going to break the rules in order to harm you, either. Still, it may be hard to remember that when you're being chased by the Ravenous Bugblatter Beast of Traal while the Vogons are busy going through the appropriate forms” - Douglas Adams, “The Hitchhiker's Guide to the Galaxy”

Azure Instance Type Upgrades

As you have come to expect, F5 has published recommendations for configuring your instance in Azure. Your instance configuration will rely largely on what modules you would like to provision in your infrastructure, but this topic is well covered in the following link: BIG-IP® Virtual Edition and Microsoft Azure. As always, what is not “YET” covered in the deployment guide can likely be found on DevCentral. If you find yourself in a scenario where you need to resize an instance of BIG-IP, the process is well documented in the TechNet article How to: Change the Size of a Windows Azure Virtual Machine and can be achieved using the following mechanisms.

Azure Management Portal

There is little bureaucracy from the management portal aside from logging in and choosing your desired settings. Whether you are looking to increase cores or memory, choosing the new size and then the ‘Save’ button will serve you well here.

PowerShell Script

One could argue that there is a bit more Vogon influence here, but I would contend that your flexibility from the programmatic perspective is significantly more robust. Aside from being confined by PowerShell parameters and variables, this approach is also well outlined in the TechNet article above.
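For parity, the same resize can also be scripted with the cross-platform Azure CLI. In the sketch below, the resource group, VM name, and target size are assumptions; the first command lists the sizes actually available to the VM.

# Sketch: resizing a BIG-IP VE instance with the Azure CLI.
az vm list-vm-resize-options --resource-group bigip-rg --name bigip-ve -o table

az vm resize --resource-group bigip-rg --name bigip-ve --size Standard_DS3_v2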
BIG-IP OS Upgrades

More good news! But first, another Douglas Adams quote.

“There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.” - Douglas Adams, “The Hitchhiker's Guide to the Galaxy”

Upgrading a BIG-IP in Azure is no different than updating any VE, or our physical appliances for that matter.

1. Download the ISO and MD5 files.
2. Install the downloaded files to an inactive boot location.
3. Boot the BIG-IP VE to the new boot location.

Tip: If there is a problem during installation, you can use log messages to troubleshoot a solution. The system stores the installation log file as /var/log/liveinstall.log. If you are new to this process, more detailed information can be found by reviewing yet another knowledge center article: Updating and Upgrading BIG-IP VE.

Utilize traditional recommended best practices

I don’t normally start paragraphs off with a quote, but when I do, it’s Douglas Adams.

“You know,” said Arthur, “it’s at times like this, when I’m trapped in a Vogon airlock with a man from Betelgeuse, and about to die of asphyxiation in deep space that I really wish I’d listened to what my mother told me when I was young.” “Why, what did she tell you?” “I don’t know, I didn’t listen.” - Douglas Adams, “The Hitchhiker's Guide to the Galaxy”

Before attempting any of the aforementioned solutions, please be sure that you have a valid backup of your configuration: Backing up your BIG-IP system configuration.

A/S upgrade

In this scenario, you would have a device group that also has ConfigSync enabled. This is a high-availability feature that synchronizes configuration changes from one BIG-IP to the other. This feature ensures that the BIG-IP device group members maintain the same configuration data and work in tandem to more efficiently process application traffic. At a high level, we start with the standby node and use the following steps to accomplish this task. More detailed information can be found by reviewing the following article: Introduction to upgrading version 11.x, or later, BIG-IP software.

1. Preparing BIG-IP modules for an upgrade
2. Preparing BIG-IP device groups for an upgrade
3. Upgrading each device within the device group
4. Changing states of the traffic groups
5. Configuring HA groups (if applicable)
6. Configuring module-specific settings
7. Verifying the software upgrade for the device group

Additional Links:

The Hitchhiker’s Guide to BIG-IP in Azure
The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”
The Hitchhiker’s Guide to BIG-IP in Azure – “High Availability”
BIG-IP in Azure? Are You Serious?
F5 Networks GitHub
Understand the structure and syntax of Azure Resource Manager templates
Deploying BIG-IP Virtual Edition in Azure
BIG-IP Systems: Upgrading Software

Secure Your New AWS Application with an F5 Web Application Firewall: Part 2 of 4
In Part 1 of our series, we used a CloudFormation Template (CFT) to create a repeatable deployment of our application in AWS. Our app is running in the cloud, our users are connecting to it, and we’re serving traffic. But more importantly, we’re selling our products. However, after a bad experience with our application falling down, we realized the hard way that it's not secure anymore.

The challenge: Our app in the cloud is getting hacked
The solution: Add a scalable web application firewall (WAF)

In the data center, we had edge security measures that protected our application. In the cloud, we no longer have this. We’re now vulnerable to attacks. This doesn’t mean that Amazon is not a secure cloud environment; it means that we need to secure our application and its data in the cloud. With a little research, we found that Amazon has a shared responsibility model for security. They take responsibility for security “of” the cloud, and as an organization hosting an application in AWS, we’re responsible for security “in” the cloud. For more information, see https://aws.amazon.com/compliance/shared-responsibility-model/.

In our last article, we showed a fairly simple setup in AWS. Now, to secure our application, we’re going to add a BIG-IP VE web application firewall (WAF) cluster. Not only will this secure our application, but it takes advantage of AWS Auto Scaling, adding more BIG-IP VE instances when traffic or CPU load requires it. To create and configure this Auto Scaling WAF, F5 provides a CloudFormation template. This template and others are available on GitHub.

This CFT assumes that you already have a VPC with multiple subnets, each in a different availability zone. If you ran our CFT from last week, you should have this already. You must also create a classic AWS ELB that will go in front of the BIG-IP VE instances. This ELB should listen on port 80 and have a health check for TCP port 8443. The ELB should also have a security group associated with it. This group should have the following inbound ports open: 22 (for SSH access to BIG-IP VE), 8443 (for the BIG-IP VE Configuration utility), and 80 (for the web app).
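As a sketch, the same prerequisite can be built with the AWS CLI for classic ELBs. The subnet and security group IDs are placeholders, and the health-check thresholds are assumptions rather than required values; the listener and TCP:8443 target mirror the requirements above.

# Sketch: create the classic ELB that will front the BIG-IP VEs.
aws elb create-load-balancer \
  --load-balancer-name BIGIPELB \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0

# Health check against TCP port 8443, matching the requirement above.
aws elb configure-health-check \
  --load-balancer-name BIGIPELB \
  --health-check Target=TCP:8443,Interval=30,Timeout=5,UnhealthyThreshold=5,HealthyThreshold=3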
Before you deploy the template, gather this information:

- The AWS ELB name (the one that will go in front of the BIG-IP VEs), for example, BIGIPELB.
- The VPC, subnet, and security group names/IDs.
- The DNS name for the ELB in front of the app servers, for example: Test-StackELB-55UMG84080MI-342616460.us-east-2.elb.amazonaws.com.

When you deploy the template, an auto scaling group, launch configuration, and BIG-IP VE instance are created. You can connect to the website by using the BIG-IP ELB address, for example: http://bigipelb-1631946395.us-east-2.elb.amazonaws.com/. The ID of the server you're connected to is displayed on the top menu bar. If you want to access BIG-IP VE, you can use SSH to connect to the instance. Then you can set the admin password (tmsh modify auth password admin) and connect to the BIG-IP VE Configuration utility (https://PublicIP:8443).

The BIG-IP VE instances that make up the WAF cluster are licensed hourly, and they automatically license themselves when they are launched. They come in different throughput limits. We’re testing right now, so we’re going to start with a 25 Mbps image on a small AWS instance type (2 vCPU, 4 GB memory). Later, when we go to production, we can update the throughput and AWS instance type.

Maintenance of the WAF Cluster

The challenge: Over time, the WAF cluster needs updates
The solution: Update the CloudFormation stack without bringing down the cluster

You’ve got your Auto Scaling WAF up and running, and it’s sending notifications about traffic that it’s analyzing. When we created this deployment, we specified 25 Mbps as the throughput limit for our BIG-IP VE instances. But now we’re selling millions of packets of our hotdog-flavored lemonade and it’s time to add some resources. The good news is that you can simply re-run the CloudFormation stack and update the settings. New instances will be launched and old instances will be terminated. Traffic will continue to be processed during this time. To ensure the new BIG-IP VE instances have the same configuration as the ones you’re terminating, you must save off the BIG-IP VE configuration before you re-deploy.

IS THIS REALLY POSSIBLE?

Yes. This is possible. The WAF keeps running, customers keep buying, and the lemonade packets are flying out the door. For example, let’s say we want to increase BIG-IP VE throughput, the number of BIG-IP VE instances, and the AWS instance type. To do this, we:

1. Back up the BIG-IP VE to a .ucs file (see the sketch below)
2. Save the .ucs file to the S3 bucket that was created when we deployed the CFT
3. Re-deploy the CFT and choose different settings

For more information, watch this video that shows how it works.
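For step 1, here is a hedged sketch of what the backup looks like from the BIG-IP command line, and one way you might move the archive to S3 from a workstation. The file name, management IP, and bucket name are assumptions.

# On the BIG-IP (via SSH): save a UCS archive of the configuration.
tmsh save sys ucs /var/local/ucs/pre-redeploy.ucs

# From a workstation: copy the archive off-box, then up to the S3 bucket.
scp admin@<bigip-mgmt-ip>:/var/local/ucs/pre-redeploy.ucs .
aws s3 cp pre-redeploy.ucs s3://<your-waf-bucket>/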
Over time you can also:

- Upgrade to newer versions
- Apply hotfixes, security fixes, etc.

This same process applies; you can update your running configuration without losing your changes. The bummer is, if you're a developer, you may not want to manage and maintain a WAF. Never fear, you have options! Part 3 will address this issue.

Successfully Deploy Your Application in the AWS Public Cloud: Part 1 of 4

In this series of articles, we're going to walk you through a fairly typical lift-and-shift deployment of BIG-IP in AWS, so that:

- If you’re just starting, you can get an idea of what lies ahead.
- If you’re already working in the cloud, you can get familiar with a variety of F5 solutions that will help your application and organization be successful.

The scenario we've chosen is pretty typical: we have an application in our data center and we want to move it to the public cloud. As part of this move, we want to give development teams access to an agile environment, and we want to ensure that NetOps/SecOps maintains the stability and control they expect. Here is a simple diagram for a starting point.

We’re a business that sells our products on the web. Specifically, we sell a bunch of random picnic supplies and candies. Our hot seller this summer is hotdog-flavored lemonade, something you might think is appalling but that really encompasses everything great about a picnic. But back to the scenario: we have a data center, where we have two physical BIG-IPs that function as a web application firewall (WAF), and they load balance traffic securely to three application servers. These application servers get their product information from a product database. Our warehouse uses a separate internal application to manage inventory, and that inventory is stored in an inventory database. In this series of articles, we’ll show you how to move the application to Amazon Web Services (AWS) and discuss the trade-offs that come at different stages in the process. So let’s get started.

The challenge: Move to the cloud; keep environments in sync
The solution: Use a CloudFormation Template (CFT) to create a repeatable cloud deployment

We’ve been told to move to the cloud, and after a thorough investigation of the options, have decided to move our picnic-supply-selling app to Amazon Web Services. Our organization maintains several different environments:

- Dev (one environment per developer)
- Test
- UAT
- Performance
- Production

These environments tend to be out of sync with one another. This frustrates everyone. And when we deploy the app to production, we often see unexpected results. If possible, we don’t want to bring this problem along to the cloud. We want to deploy our application to all of these environments and have the result be the same every time. Even if each developer has different code, all developers should be working in an infrastructure environment that matches all other environments, most importantly, production.

Enter the AWS CloudFormation template. We can create a template and use it to consistently spin up the same environment. If we require a change, we can make the modification and save a new version of the CFT, letting everyone on the team know about the change. And it’s version-controlled, so we can always roll back if we mess up. So we use a CFT to create our application servers and deploy the latest code on them. In our scenario, we create an AWS Elastic Load Balancer (ELB) so we can continue load balancing to the application servers. Our product data has a dependency on inventory data that comes from the warehouse, and we use BIG-IP for authentication (among other things). We use our on-premises BIG-IPs to create an IPsec VPN tunnel to AWS. This way, our application can maintain a connection to the inventory system. When we get the CFT working the way we want, we can swing the DNS to point to these new AWS instances.

Details about the CloudFormation template

We put a CFT on GitHub that you can deploy to demonstrate the AWS part of this setup. It may help you visualize this deployment, and in part 2 of this series, we'll be expanding on this initial setup. If you'd like, you can deploy by clicking the following button. Ensure that when you're in the AWS console, you select the region where you want to deploy. And if you're really new to this, just remember that active instances cost money.

The CFT creates four Windows servers behind an AWS Elastic Load Balancer (ELB). Three of the servers are running a web app and one is used for the database. Beware, the website is a bit goofy and we were feeling punchy when we created it. Here is a brief explanation of what specific sections of the CFT do.

Parameters

The Parameters section includes fields you must populate when deploying the CFT. In this case, you’ll have to specify a name for your servers, and the AMI (Amazon Machine Image) ID to build the servers from. In the template, you can see what parameters look like. For example, the field where you enter the AMI ID:

"WindowsAMI": {
  "Description": "Windows Version and Region AMI",
  "Type": "String"
}

To find the ID of the AMI you want to use, look in the marketplace, find the product you want, click the Manual Launch tab, and note the AMI ID for the region where you’re going to deploy. We are using Microsoft Windows Server 2016 Base and Microsoft Windows Server 2016 with MSSQL 2016.

Note: These IDs can change; check the AWS Marketplace for the latest AMI IDs.
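If you'd rather script the lookup, the AWS CLI can return the newest matching AMI for your current region. The name filter below follows Amazon's public naming convention for the base Windows Server 2016 image; treat it as an assumption to verify, and note that the MSSQL image uses a different name.

# Sketch: find the latest Windows Server 2016 Base AMI ID in the current region.
aws ec2 describe-images \
  --owners amazon \
  --filters "Name=name,Values=Windows_Server-2016-English-Full-Base-*" \
  --query "sort_by(Images,&CreationDate)[-1].[ImageId,Name]" \
  --output text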
Details about the CloudFormation template

We put a CFT on GitHub that you can deploy to demonstrate the AWS part of this setup. It may help you visualize this deployment, and in part 2 of this series, we’ll be expanding on this initial setup. If you’d like, you can deploy by clicking the following button. Ensure that when you’re in the AWS console, you select the region where you want to deploy. And if you’re really new to this, just remember that active instances cost money.

The CFT creates four Windows servers behind an AWS Elastic Load Balancer (ELB). Three of the servers are running a web app and one is used for the database. Beware, the website is a bit goofy; we were feeling punchy when we created it.

Here is a brief explanation of what specific sections of the CFT do.

Parameters

The Parameters section includes fields you must populate when deploying the CFT. In this case, you’ll have to specify a name for your servers and the AMI (Amazon Machine Image) ID to build the servers from. In the template, you can see what parameters look like. For example, the field where you enter the AMI ID:

    "WindowsAMI": {
        "Description": "Windows Version and Region AMI",
        "Type": "String"
    }

To find the ID of the AMI you want to use, look in the AWS Marketplace, find the product you want, click the Manual Launch tab, and note the AMI ID for the region where you’re going to deploy. We are using Microsoft Windows Server 2016 Base and Microsoft Windows Server 2016 with MSSQL 2016.

Note: These IDs can change; check the AWS Marketplace for the latest AMI IDs.

Resources

The Resources section of the CFT performs the legwork. The CFT creates a Virtual Private Cloud (VPC) with three subnets so that the application is redundant across availability zones. It creates a Windows Server instance in each availability zone, and it creates an AWS Elastic Load Balancer (ELB) in front of the application servers.

Code that creates the load balancer:

    "StackELB01": {
        "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
        "Properties": {
            "Subnets": [
                { "Ref": "StackSubnet1" },
                { "Ref": "StackSubnet2" },
                { "Ref": "StackSubnet3" }
            ],
            "Instances": [
                { "Ref": "WindowsInstance1" },
                { "Ref": "WindowsInstance2" },
                { "Ref": "WindowsInstance3" }
            ],
            "Listeners": [
                {
                    "LoadBalancerPort": "80",
                    "InstancePort": "80",
                    "Protocol": "HTTP"
                }
            ],
            "HealthCheck": {
                "Target": "HTTP:80/",
                "HealthyThreshold": "3",
                "UnhealthyThreshold": "5",
                "Interval": "30",
                "Timeout": "5"
            },
            "SecurityGroups": [
                { "Ref": "ELBSecurityGroup" }
            ]
        }
    }

Then the CFT uses Cloud-Init to configure the Windows machines. It installs IIS on each machine, sets the hostname, and creates an index.html file that contains the server name (so that when you load balance to each machine, you can determine which app server is serving the traffic). It also adds your user to the machine’s local Administrators group.

Note: This is just part of the code. Look at the CFT itself for details.

    "install_IIS": {
        "files": {
            "C:\\Users\\Administrator\\Downloads\\firstrun.ps1": {
                "content": {
                    "Fn::Join": [
                        "",
                        [
                            "param ( \n",
                            " [string]$password,\n",
                            " [string]$username,\n",
                            " [string]$servername\n",
                            ")\n",
                            "\n",
                            "Add-Type -AssemblyName System.IO.Compression.FileSystem\n",
                            "\n",
                            "## Create user and add to Administrators group\n",
                            "$pass = ConvertTo-SecureString $password -AsPlainText -Force\n",
                            "New-LocalUser -Name $username -Password $pass -PasswordNeverExpires\n",
                            "Add-LocalGroupMember -Group \"Administrators\" -Member $username\n",

The CFT then calls PowerShell to run the script.
"commands": { "b-configure": { "command": { "Fn::Join": [ " ", [ "powershell.exe -ExecutionPolicy unrestricted C:\\Users\\Administrator\\Downloads\\firstrun.ps1", { "Ref": "adminPassword" }, { "Ref": "adminUsername"}, { "Ref": "WindowsName1"}, "\n" ] Finally, this section includes signaling. You can use the Cloud-Init cfn-signal helper script to pause the stack until resource creation is complete. For more information, see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-signal.html. Sample of signaling: "WindowsInstance1WaitHandle": { "Type": "AWS::CloudFormation::WaitConditionHandle" }, "WindowsInstance1WaitCondition": { "Type": "AWS::CloudFormation::WaitCondition", "DependsOn": "WindowsInstance1", "Properties": { "Handle": { "Ref": "WindowsInstance1WaitHandle" }, "Timeout": "1200" } } Outputs The output includes the URL of the AWS ELB, which you use to connect to your applications. "Outputs": { "ServerURL": { "Description": "The AWS Generated URL.", "Value": { "Fn::Join": [ "", [ "http://", { "Fn::GetAtt": [ "StackELB01", "DNSName" ] } This output is displayed in the AWS console, on the Outputs tab. You can use the link to quickly connect to the ELB. When we're done deploying the app and we’ve fully tested it in AWS, we can swing the DNS from the internal address to the AWS load balancer, and we’re up and running. Come back next week as we implement BIG-IP VE for security in AWS.464Views0likes0CommentsCloud/Automated Systems need an Architecture
When we’re done deploying the app and we’ve fully tested it in AWS, we can swing the DNS from the internal address to the AWS load balancer, and we’re up and running. Come back next week as we implement BIG-IP VE for security in AWS.

Cloud/Automated Systems need an Architecture

Introduction

Architecture. The physical world we inhabit is built on it, literally. One of the great parts about working in technology is that it has allowed me to travel the world and appreciate many things, architecture being one of them. Up until recently I’ve always seen these two worlds, architecture and technology, as being very separate. That changed recently... through many different events, professionally and personally, my brain finally got out of its own way and started grasping for a link between these two forces in my mind. One of the many ah hah! moments I’ve had over the past year resulted in what I’m writing about today. So, if you’ll indulge me… let’s tell a story!

For years I’ve been trying to pin down what ‘Cloud’ is. It’s not an uncommon question. In fact, the term Cloud itself has become one that induces panic attacks and eye twitching the world around. Working with customers every day, I saw two scenarios play out over and over again:

1. I meet with a customer, we agree that something needs to be done related to ‘Cloud’… we try and try for months, even years, and never, jointly, reach a state of production readiness. Pressure builds and builds until something or someone breaks, and then a new strategy is proposed and we rinse and repeat.
2. I meet with a customer and learn the organization has made a decision regarding the ‘Cloud’ vendor/solution/technology of choice. A number of requirements are laid out, we jointly try to meet these requirements, and eventually end up in the spiral of desperation that is ‘descope, deprioritize’. What does get implemented is a shell of the original vision and rarely meets the original business objectives.

For years we’ve been faced with project after project like those described above. Sure, there have been successes. But if I’m being honest with myself, not nearly enough. This got me thinking about architecture: the process of designing something functional and aesthetically pleasing within the constraints of how we can manipulate the physical world. How does an architect see their creation come to life? Do they just start digging holes and welding beams? No! They build models of their vision. Those models are then sent to other groups that specialize in things like HVAC, electrical, and plumbing. They are refined time and time again to adhere to the constraints of the real world without losing sight of the original vision. Only once these models are finalized is construction started. While architecture is rooted in a creative process, its ultimate expression in the real world is bound by the rules of physics, budgets, and timelines.

Can we apply this same methodology to the design of Automated Systems? Yes! Does an Architecture that properly addresses Layer 4-7 services exist? No!

My colleagues and I stepped back from trying to just do ‘something’. We stopped. We started to build our models. We refined and tested. What resulted is a generalized Architecture for Automated Systems. In this series of articles we will explore this Architecture and how F5 and our customers can use it to ‘build a model’ that can then be expressed into real-world implementations. To get started, this article will lay out the foundational concepts for Automated Systems and then use that knowledge to build out the foundational models for our Architecture.
Automation Concepts

One of the great successes over the past year has been a free ‘Intro to Automation & Orchestration’ class that F5 has developed and delivered to customers worldwide (if you are interested in taking the class, contact your account team). The story of this course and the DevOps methodology behind it will be detailed in a separate article; however, the concepts we teach in the class form the foundation for our Architecture. Those concepts are:

- Appropriate Abstraction
- Domain Specific Knowledge
- Source of Truth
- Imperative Processes
- Declarative Interfaces
- Orchestration & DevOps

Appropriate Abstraction

In order to successfully automate complex systems you must understand the concept of Abstraction. More importantly, you must abstract in an appropriate manner. To explain this concept, let’s use the following slide:

On the left you have a pile of lumber. In the middle you have a mostly built house. On the right you have a fully completed house. Now, imagining that you are in the market for a house, let’s examine each.

The pile of lumber represents the fully custom house. If you want to turn that pile of lumber into a house, you would have to learn many skills and invest a large amount of time and effort into building that house.

The mostly built house represents the semi-custom, new-construction home that allows you to pick a floorplan you like and customize some of the finishes in the house. The reason this type of home is so prevalent is that the builder can leverage economies of scale. The home owner also benefits because they essentially pick what is important to them and the builder takes care of the rest.

The completed home represents the pre-existing home that you would purchase from an existing home owner. If you’ve ever purchased an existing home, you’ll know that most of the time the purchase of the home is just the first step. Some refresh and renovation of the home is usually required to suit your individual needs. If you’ve ever done this, you’ll also know that changing an existing home can open a Pandora’s box of issues that become very costly.

How does this link back to technology? Let’s map these concepts over:

- Pile of lumber: What most systems look like today. Fully customizable but NOT repeatable. Large requirement for expert-level knowledge of the system(s). Long lead times for deployment.
- Mostly built house: What we should actually work towards. Customizable within reason, but repeatable. Lowered requirement for expert-level knowledge. Predictable lead times for deployment.
- Pre-existing home: What everyone tries to sell you. The proverbial ‘easy-button’. The ‘cloud-in-a-box’. Sure, you get something that works. However, changing anything inside to suit the needs of your business usually opens a Pandora’s box of issues.

So, back to Appropriate Abstraction. The key idea here is to make sure that as you abstract services in an Automated System, it’s done in a manner that leads to the ‘mostly built house’. Doing this requires us to understand that not every system or service can be automated. There will always be a need for custom-built services. The decision whether to abstract a service should be based on achieving economies of scale rather than a blanket mandate to ‘automate everything’. Conversely, providing an ‘easy-button’ does one thing: it forces a vendor’s expression of a use case onto your environment. That may be OK for simple services and systems; however, it does not represent the majority of systems and applications in real-world environments.
Appropriate Abstraction allows you to ‘assemble the button’ for others to push.

Domain Specific Knowledge

Now that we’ve explained Appropriate Abstraction, let’s take a look at Domain Specific Knowledge. Domain Specific Knowledge is the specific knowledge that an individual (or system) must have before they can complete a process. Using the example above, constructing a new home from a pile of lumber would require a very high level of Domain Specific Knowledge in many trades (concrete, framing, electrical, HVAC, painting, tile, etc.). Conversely, purchasing a fully built house (with the assumption that nothing needs to be renovated) requires very little Domain Specific Knowledge as it relates to home construction.

Why talk about this? Well, you’ll see in the following sections that the level of Domain Specific Knowledge has a direct impact on how various automated systems work together (or don’t) on the path to production deployments. Furthermore, systems are built by people. It is well known that people, while very capable, cannot keep up with the rate of change of all the underlying systems. Therefore the only solution is to limit the NUMBER of systems that have to be learned rather than limiting the DEPTH of knowledge in those systems. In other words: narrow but deep instead of wide and shallow.

Source of Truth

A Source of Truth (SOT) is defined as a system or object that contains the authoritative representation of a service and its components. For example, in traditional environments the SOT is the running configuration on a particular device. When automating systems it is critical to understand that the SOT may NOT reside on the device itself. While each device will have a version of the running configuration, we make a distinction that the authoritative source for that data may be somewhere else (off-device). This distinction has some implications:

- Changes for a service should be pushed or pulled from the SOT to subordinate devices.
- Out-of-band changes must be handled very carefully (preferably, totally avoided).
- The SOT must provide security for service metadata.

Implementing a single SOT for a single technology vendor is complicated. When multiple systems are joined together via Orchestration, the problem becomes much harder. In this instance it is important to make the distinction between a Truth and the Source of that Truth. A Truth is simply a piece of data. That Truth can be distributed and manipulated by multiple systems as long as the authoritative source of that Truth is well defined and consistent. In complex systems there are often multiple sources of truth. The top-level Source of Truth only knows the information contained in the abstracted representation of a service. Vendor-specific automation tools may apply more data during automated operations as you move to less abstracted interfaces of the service. As long as each Truth is tied to one and only one Source of Truth, things work fine.

Imperative Processes

An Imperative Process is simple. You execute thousands of imperative processes every day. An imperative process is the step-by-step set of actions required to achieve an outcome. A simple example is making a jam sandwich. This process can be separated into a sequence of ordered steps:

1. Gather ingredients:
   - Bread
   - Butter
   - Strawberry jam
2. Butter 2 slices of bread.
3. Spread some strawberry jam on one of the slices of bread.
4. Place the second slice on top of the first slice.
5. Cut the sandwich in half and enjoy!
Now, let’s say we have a friend over for lunch one day who doesn’t share your specific sandwich preferences. The complexity around Imperative Processes arises when you have to apply customizations, or ‘branches’, to the process. At every step of the process above you have the potential for options. For example, let’s say your friend:

- Can’t have butter due to cholesterol issues
- Is allergic to strawberries
- Has their arm in a cast due to a boating accident
- Prefers their sandwich cut through the center, not diagonally (seriously!)

Could the process above be used to create this ‘custom’ sandwich? No. Instead we branch the process at each step based on the requirements. The resulting process starts to get very complicated. If you imagine building this process as a tree, each ‘option’ results in another branch. If you try to enumerate all those branches to their outcome, you can see how we quickly reach a scenario where the set of problems is unsolvable.

The main takeaway from this concept should be that we must ‘prune the tree’. While Imperative Processes will always be required, it’s the job of the expert for a particular technology, or solution, to understand which use cases can be appropriately abstracted. From there you must minimize the number of branches to the lowest set that delivers the service as intended.

Declarative Interfaces

So let’s take the sandwich analogy one step further. People have gotten wind of the superior quality of your jam sandwiches. You’ve decided to start a Reggae-themed Jam Sandwich restaurant called Jammin’. How can you deliver your jam sandwiches to the masses? The answer is something almost everyone is familiar with: the ubiquitous Drive-Thru.

The Drive-Thru concept is a perfect illustration of a Declarative Interface. Consumers simply declare the sandwich they want from a pre-defined menu. Some options for customization are present; however, these options are limited because the intent of the Drive-Thru is to deliver jam sandwiches as fast as possible for low, low prices. The process behind making the sandwich (and all the logistics of running a restaurant) is totally abstracted away from the consumer.

When looking at Automated Systems, it’s important to understand that when you properly combine Appropriate Abstraction with Imperative Processes, the result is a Declarative Interface that should require a low level of Domain Specific Knowledge to consume. The underlying Imperative Processes could be simple or complex; however, that complexity does not have to be exposed to the consumer of the service.

Orchestration & DevOps

For years the belief has been that a top-level orchestrator should implement all the Imperative Processes required for EVERY technology component in an Automated System. This assumption has huge implications: you are exponentially increasing the requirement for Domain Specific Knowledge across an organization. Going forward, orchestration needs to be done differently. Orchestration should consume abstracted, declarative interfaces ONLY. This allows the Domain Specific Knowledge required for one system (e.g., F5 BIG-IP) to be de-coupled from the Domain Specific Knowledge required by the Orchestration system (Ansible, vRO, etc.). By focusing on Abstraction and Declarative Interfaces, Orchestration in a large system is possible without a requirement for Domain Specific Knowledge in every technology component. A toy sketch of this pattern follows.
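Here is that sketch in Python. Nothing in it is a real product API; the function names, service types, and fields are invented for illustration. The point is the shape: the consumer hands over a small declaration picked from a constrained menu, and the imperative branches live behind the interface where the orchestrator never has to see them.

    # A toy sketch of a Declarative Interface. No real product API is assumed;
    # all names here are hypothetical. The imperative steps (and their Domain
    # Specific Knowledge) stay hidden behind the interface.

    def _validate(declaration: dict) -> None:
        # Imperative detail: reject anything that isn't on the menu.
        if declaration["type"] not in {"http", "https-with-waf"}:
            raise ValueError(f"unsupported service type: {declaration['type']}")

    def _provision(declaration: dict) -> None:
        # Imperative detail: stand-in for the vendor-specific automation layer
        # (device calls, templates, branching logic) that realizes the intent.
        print(f"provisioning {declaration['type']} for {declaration['name']} "
              f"with {len(declaration['pool_members'])} pool members")

    def deploy_service(declaration: dict) -> None:
        """The declarative interface: consume intent, hide the process."""
        _validate(declaration)
        _provision(declaration)

    # The consumer's view: low Domain Specific Knowledge, menu-style options
    # only -- the Drive-Thru, not the kitchen.
    deploy_service({
        "name": "picnic-store",
        "type": "https-with-waf",   # picked from the pre-defined menu
        "pool_members": ["10.0.1.10", "10.0.1.11", "10.0.1.12"],
    })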
If these rules are followed, the resulting interfaces allow integration of the Automated System with Agile and/or DevOps methodologies. Adopting DevOps methodologies requires organizational (people) change; however, once that change is in progress, the underlying systems must provide interfaces that seamlessly integrate with the DevOps methodology and toolchain.

The Architecture

The Fire Triangle

The picture below is something that may seem familiar. It’s a depiction of the ‘fire triangle’. This picture is used to convey the concept that combustion requires three components for a sustained chain reaction:

- Oxygen
- Heat
- Fuel

The premise is simple. If you want to create fire, you need all three. On the other hand, if you have to extinguish a fire, you only need to remove one component from the triangle. The age-old ‘stop, drop and roll’ technique actually extinguishes fire by eliminating the Oxygen component (the rolling motion essentially chokes the fire of oxygen).

What does this have to do with our Architecture? Well, much like fire needs three components to burn, Automated Systems require three separate models, working together, to be successful. If any one of these models falls apart, our chances of success are extinguished.

The Cloud/Automation Triangle

Throughout the Architecture we will discuss a set of ‘Truths’ and ‘Attributes’ that apply to the Architecture and its component Models. The Truths are assumptions and rules that must be made and cannot be broken. Attributes are less strictly defined; however, they must adhere to the Truths in their parent model and Architecture. Experience has guided us in creating three discrete models that must work together seamlessly to deliver on the promise of Automated Systems:

- Service Model
- Deployment Model
- Operational Model

Each of these models must be well defined and serve to form a stable foundation for building Automated Systems over time. We will cover each of them in detail throughout this series of articles.

Evolution of a Model

At the beginning of this article I explained how architects iterate over their vision until they have enough in place to start construction. This iteration is key to how we actually meet business and production objectives over time. Rather than trying to define, in detail, how every objective is met, we adopt the DevOps concepts of Continuous Improvement (CI) and Continuous Deployment (CD). The idea is to implement each of the models discussed above in phases that form a feedback loop.

To support this (and, more fundamentally, DevOps methodologies), the Architecture must leverage CI/CD as a base Truth. As we iterate over deployments, the insights, challenges, and shortcomings of the current Production Phase deployment should be prioritized and ordered, then fed back into a Design Phase. A new iteration of the Models that addresses those challenges is then deployed to Production. The overall goal is to leverage DevOps CI/CD methodologies and toolchain to enable constant iteration of the underlying models in the Architecture until a steady state is achieved (if that ever really happens). In short, don’t try to do everything all at once. Instead, define the desired end state and then break it down into iterations of each model that work toward it.

Architectural Truths

As explained in the previous section, a set of Truths is required to bind ourselves within production realities.
From the Architectural level these truths are:

- Enable a DevOps Methodology and Toolchain
- Lower or eliminate Domain Specific Knowledge
- Leverage Infrastructure-as-Code
- Don’t sacrifice Functionality to Automate
- Provide a Predictable Cost Model
- Enable delivery of YOUR innovation to the market

Some of these points have already been discussed throughout this article. Rather than repeating ourselves, we will focus on the specific items that have not been discussed.

Leverage Infrastructure-as-Code

One of the key concepts we discussed earlier was Source of Truth. In order to adhere to the guidelines around SOT, it’s important to treat service metadata as code. This means that all metadata should be contained within a Source of Truth that naturally maintains access control, revision histories, and the ability to compare metadata at different points in time. Of those already adopting an Infrastructure-as-Code model, the majority of deployments leverage source code management tools such as Git for these functions; however, many other solutions exist. The common thread between all of these tools is that configuration truths and metadata are handled with the same lifecycle process as a developer’s source code.

Don’t sacrifice functionality to automate

This truth speaks to two different points:

1. The decision to automate a system or service. If critical functionality is given up for the sake of automation, then a different decision has to be made. Rather than sacrificing functionality, it is important that vendors and customers work to define how advanced functionality can be automated as much as possible, and work to that goal.
2. An understanding that if functionality is being sacrificed, then maybe the system or service was not abstracted properly to begin with. Or maybe it can’t be abstracted. Either way, the decision to automate that service should be revisited and abstraction applied properly. Or the service should not be automated at this time (it could always be covered in subsequent iterations).

Provide a predictable cost model

It’s simple: provide a model that can convey the cost of a service, given appropriate scaling data. This means that Automated Systems should account for and prevent runaway situations that result in cost overruns.

Enable delivery of YOUR innovation to the market

Throughout this article we’ve talked about a number of technical topics, but this truth is firmly rooted in the business space. When implemented correctly, Automated Systems can serve as a competitive advantage by enabling delivery of innovation to market as fast as possible.

Till next time

Phew; we covered a lot today. This is a good start, but there’s more! Continue on to the following articles in this series as we dive into how Service, Deployment, and Operational Models should be built.

Next article: The Service Model for Cloud/Automated Systems Architectures