Successfully Deploy Your Application in the AWS Public Cloud: Part 1 of 4
In this series of articles, we're going to walk you through a fairly typical lift-and-shift deployment of BIG-IP in AWS, so that:

- If you're just starting, you can get an idea of what lies ahead.
- If you're already working in the cloud, you can get familiar with a variety of F5 solutions that will help your application and organization be successful.

The scenario we've chosen is pretty typical: we have an application in our data center and we want to move it to the public cloud. As part of this move, we want to give development teams access to an agile environment, and we want to ensure that NetOps/SecOps maintains the stability and control they expect.

Here is a simple diagram for a starting point. We're a business that sells our products on the web. Specifically, we sell a bunch of random picnic supplies and candies. Our hot seller this summer is hotdog-flavored lemonade, something you might think is appalling but that really encompasses everything great about a picnic.

But back to the scenario: We have a data center, where we have two physical BIG-IPs that function as a web application firewall (WAF), and they load balance traffic securely to three application servers. These application servers get their product information from a product database. Our warehouse uses a separate internal application to manage inventory, and that inventory is stored in an inventory database.

In this series of articles, we'll show you how to move the application to Amazon Web Services (AWS) and discuss the trade-offs that come at different stages in the process. So let's get started.

The challenge: Move to the cloud; keep environments in sync.
The solution: Use a CloudFormation Template (CFT) to create a repeatable cloud deployment.

We've been told to move to the cloud, and after a thorough investigation of the options, we have decided to move our picnic-supply-selling app to Amazon Web Services. Our organization has several different environments that we maintain:

- Dev (one environment per developer)
- Test
- UAT
- Performance
- Production

These environments tend to be out of sync with one another. This frustrates everyone, and when we deploy the app to production, we often see unexpected results. If possible, we don't want to bring this problem along to the cloud. We want to deploy our application to all of these environments and have the result be the same every time. Even if each developer has different code, all developers should be working in an infrastructure environment that matches all other environments, most importantly production.

Enter the AWS CloudFormation template. We can create a template and use it to consistently spin up the same environment. If we require a change, we can make the modification and save a new version of the CFT, letting everyone on the team know about the change. And it's version-controlled, so we can always roll back if we mess up.

So we use a CFT to create our application servers and deploy the latest code on them. In our scenario, we create an AWS Elastic Load Balancer so we can continue load balancing to the application servers. Our product data has a dependency on inventory data that comes from the warehouse, and we use BIG-IP for authentication (among other things). We use our on-premises BIG-IPs to create an IPsec VPN tunnel to AWS. This way, our application can maintain a connection to the inventory system. When we get the CFT working the way we want, we can swing the DNS to point to these new AWS instances.
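To make the "repeatable deployment" idea concrete, here is a minimal sketch of launching the same template into any environment from the AWS CLI. The stack and file names are placeholders, and the only parameter shown is the WindowsAMI parameter described in the template details below; supply whatever other parameters the template actually defines.

# Sketch: spin up one environment from the version-controlled template (names are hypothetical)
aws cloudformation create-stack \
  --stack-name picnic-app-dev \
  --template-body file://picnic-app-cft.json \
  --parameters ParameterKey=WindowsAMI,ParameterValue=ami-xxxxxxxx

# Watch progress until the stack reports CREATE_COMPLETE
aws cloudformation describe-stacks --stack-name picnic-app-dev --query 'Stacks[0].StackStatus'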
Details about the CloudFormation template

We put a CFT on GitHub that you can deploy to demonstrate the AWS part of this setup. It may help you visualize this deployment, and in part 2 of this series, we'll be expanding on this initial setup. If you'd like, you can deploy by clicking the following button. Ensure that when you're in the AWS console, you select the region where you want to deploy. And if you're really new to this, just remember that active instances cost money.

The CFT creates four Windows servers behind an AWS Elastic Load Balancer (ELB). Three of the servers are running a web app and one is used for the database. Beware: the website is a bit goofy and we were feeling punchy when we created it.

Here is a brief explanation of what specific sections of the CFT do.

Parameters

The Parameters section includes fields you must populate when deploying the CFT. In this case, you'll have to specify a name for your servers and the AMI (Amazon Machine Image) ID to build the servers from. In the template, you can see what parameters look like. For example, the field where you enter the AMI ID:

"WindowsAMI": {
  "Description": "Windows Version and Region AMI",
  "Type": "String"
}

To find the ID of the AMI you want to use, look in the marketplace, find the product you want, click the Manual Launch tab, and note the AMI ID for the region where you're going to deploy. We are using Microsoft Windows Server 2016 Base and Microsoft Windows Server 2016 with MSSQL 2016.

Note: These IDs can change; check the AWS Marketplace for the latest AMI IDs.

Resources

The Resources section of the CFT performs the legwork. The CFT creates a Virtual Private Cloud (VPC) with three subnets so that the application is redundant across availability zones. It creates a Windows Server instance in each availability zone, and it creates an AWS Elastic Load Balancer (ELB) in front of the application servers.

Code that creates the load balancer:

"StackELB01": {
  "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
  "Properties": {
    "Subnets": [
      { "Ref": "StackSubnet1" },
      { "Ref": "StackSubnet2" },
      { "Ref": "StackSubnet3" }
    ],
    "Instances": [
      { "Ref": "WindowsInstance1" },
      { "Ref": "WindowsInstance2" },
      { "Ref": "WindowsInstance3" }
    ],
    "Listeners": [
      {
        "LoadBalancerPort": "80",
        "InstancePort": "80",
        "Protocol": "HTTP"
      }
    ],
    "HealthCheck": {
      "Target": "HTTP:80/",
      "HealthyThreshold": "3",
      "UnhealthyThreshold": "5",
      "Interval": "30",
      "Timeout": "5"
    },
    "SecurityGroups": [
      { "Ref": "ELBSecurityGroup" }
    ]
  }
}

Then the CFT uses Cloud-Init to configure the Windows machines. It installs IIS on each machine, sets the hostname, and creates an index.html file that contains the server name (so that when you load balance to each machine, you will be able to determine which app server is serving the traffic). It also adds your user to the machine's local Administrators group.

Note: This is just part of the code. Look at the CFT itself for details.

"install_IIS": {
  "files": {
    "C:\\Users\\Administrator\\Downloads\\firstrun.ps1": {
      "content": {
        "Fn::Join": [
          "",
          [
            "param ( \n",
            " [string]$password,\n",
            " [string]$username,\n",
            " [string]$servername\n",
            ")\n",
            "\n",
            "Add-Type -AssemblyName System.IO.Compression.FileSystem\n",
            "\n",
            "## Create user and add to Administrators group\n",
            "$pass = ConvertTo-SecureString $password -AsPlainText -Force\n",
            "New-LocalUser -Name $username -Password $pass -PasswordNeverExpires\n",
            "Add-LocalGroupMember -Group \"Administrators\" -Member $username\n",

The CFT then calls PowerShell to run the script:

"commands": {
  "b-configure": {
    "command": {
      "Fn::Join": [
        " ",
        [
          "powershell.exe -ExecutionPolicy unrestricted C:\\Users\\Administrator\\Downloads\\firstrun.ps1",
          { "Ref": "adminPassword" },
          { "Ref": "adminUsername" },
          { "Ref": "WindowsName1" },
          "\n"
        ]

Finally, this section includes signaling. You can use the Cloud-Init cfn-signal helper script to pause the stack until resource creation is complete. For more information, see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-signal.html.

Sample of signaling:

"WindowsInstance1WaitHandle": {
  "Type": "AWS::CloudFormation::WaitConditionHandle"
},
"WindowsInstance1WaitCondition": {
  "Type": "AWS::CloudFormation::WaitCondition",
  "DependsOn": "WindowsInstance1",
  "Properties": {
    "Handle": { "Ref": "WindowsInstance1WaitHandle" },
    "Timeout": "1200"
  }
}

Outputs

The output includes the URL of the AWS ELB, which you use to connect to your applications.

"Outputs": {
  "ServerURL": {
    "Description": "The AWS Generated URL.",
    "Value": {
      "Fn::Join": [
        "",
        [
          "http://",
          { "Fn::GetAtt": [ "StackELB01", "DNSName" ] }

This output is displayed in the AWS console, on the Outputs tab. You can use the link to quickly connect to the ELB.

When we're done deploying the app and we've fully tested it in AWS, we can swing the DNS from the internal address to the AWS load balancer, and we're up and running.

Come back next week as we implement BIG-IP VE for security in AWS.
The Hitchhiker’s Guide to BIG-IP in Azure – “High Availability”

Hello and welcome to the third installment of “The Hitchhiker’s Guide to BIG-IP in Azure”. In previous posts (I assume you read and memorized them… right?), we looked at the Azure infrastructure and the many options one has for deploying a BIG-IP into Azure. Here are some links to the previous posts in case you missed them; they are well worth the read. Let us now turn our attention to the next topic in our journey: high availability.

- The Hitchhiker’s Guide to BIG-IP in Azure
- The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”

A key to ensuring high availability of your Azure-hosted application (or any application, for that matter) is making sure to eliminate any potential single points of failure. To that end, load balancing is typically used as the primary means to ensure a copy of the application is always reachable. This is one of the most common reasons for utilizing a BIG-IP. Those of us who have deployed the F5 BIG-IP in a traditional data center environment know that ensuring high availability (HA) is more than just having multiple pool members behind a single BIG-IP; it’s equally important to ensure the BIG-IP does not represent a single point of failure. The same holds true for Azure deployments: eliminate single points of failure.

While the theory is the same for both on-premises and cloud-based deployments, the process of deploying and configuring for HA is not. As you might recall from our first installment, due to infrastructure limitations common across public clouds, the traditional method of deploying the BIG-IP in an active/standby pair is not feasible. That’s ok; no need to search the universe. There’s an answer; and no, it’s not 42. (Sorry, couldn’t help myself.)

Active / Active Deployment

“Say, since I have to have at least 2 BIG-IPs for HA, why wouldn’t I want to use both?” Well, for most cases, you probably would want to, and can. Since the BIG-IP is basically another virtual machine, we can make use of various native Azure resources (refer to Figure 1) to provide high availability.

Availability Sets

The BIG-IPs can be, and should be, placed in an availability set. The BIG-IPs are then located in separate fault and update domains, ensuring local hardware fault tolerance.

Azure Load Balancers

The BIG-IP can be deployed behind an Azure load balancer to provide Active/Active high availability. It may seem strange to “load balance” a load balancer. However, it’s important to remember that the BIG-IP provides a variety of application services including WAF, federation, SSO, SSL offload, etc. This is in addition to traffic optimization and comprehensive load balancing.

Azure Autoscale

For increased flexibility with respect to performance, capacity, and availability, BIG-IPs can be deployed into scale sets (refer to Figure 2 below). By combining multiple public-facing IP endpoints, interfaces, and horizontal and vertical auto scaling, it’s possible to efficiently run multiple optimized, secure, and highly available applications.

Note: Currently, multiple BIG-IP instance deployments (including scale sets) must be deployed programmatically, typically via an ARM template. Here’s the good news: F5 has several ARM templates available on GitHub at https://github.com/F5Networks/f5-azure-arm-templates.

Active / Standby Deployment with Public Endpoint Migration

As I just mentioned, in most cases an active/active deployment is preferred. However, there may be stateful applications that still require load balancing mechanisms beyond an Azure load balancer’s capability.
Thanks to the guys in product development, there’s an experimental ARM template available on GitHub for deploying a pair of Active/Standby BIG-IPs. This deployment option mimics F5’s traditional on-premises model (thanks again, Mike Shimkus).

Global High Availability

With data centers literally located all over the world, it’s possible to place your application close to the end user wherever they might be located. By incorporating BIG-IP DNS (formerly GTM), applications can be deployed globally for performance as well as availability. Users can be directed to the appropriate application instance. In the event an application becomes unavailable or overloaded, users will be automatically redirected to a secondary subscription or region. This can be implemented down to a specific virtual server. All other unaffected traffic will still be sent to the desired region.

Well friends, that’s it for this week. Stay tuned for next week when we take a look at life cycle management. Or would you prefer some Vogon poetry?

Additional Links:

- The Hitchhiker’s Guide to BIG-IP in Azure
- The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”
- BIG-IP in Azure? Are You Serious?
- F5 Networks GitHub
- Overview of Autoscale in Microsoft Azure Virtual Machines, Cloud Services, and Web Apps
- Understand the structure and syntax of Azure Resource Manager templates
- Deploying BIG-IP Virtual Edition in Azure
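As a practical footnote to the scale-set and ARM-template discussion above, here is a minimal, hedged sketch of launching one of the F5 ARM templates with the Azure CLI. The resource group name is made up, and the template URI and parameters file are placeholders; use the specific template and parameters documented in the GitHub repository linked above.

# create a resource group to hold the deployment (name and region are hypothetical)
az group create --name bigip-ha-rg --location westus

# deploy an ARM template from the F5 GitHub repo into that group
az group deployment create \
  --resource-group bigip-ha-rg \
  --template-uri https://raw.githubusercontent.com/F5Networks/f5-azure-arm-templates/<path-to-template>/azuredeploy.json \
  --parameters @azuredeploy.parameters.json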
BIG-IP deployments using Ansible in private and public cloud

F5 has been actively developing Ansible modules that help in deploying an application on the BIG-IP. For a list of candidate modules for the Ansible 2.4 release, refer to the GitHub link. These modules can be used to configure any BIG-IP (physical or virtual) in any environment (public, private or hybrid cloud). Before we can use the BIG-IP to deploy an application, we need to spin up a virtual edition of the BIG-IP. Let’s look at some ways to spin up a BIG-IP in the public and private cloud.

Private cloud

Create a BIG-IP guest VM through VMware vSphere

For more details on the Ansible module, refer to the Ansible documentation.

Pre-condition: On the VMware side, a template of the BIG-IP image has been created.

Example playbook:

- name: Create VMware guest
  hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Deploy BIG-IP VE
      vsphere_guest:
        vcenter_hostname: 10.192.73.100        # vCenter hostname or IP address
        esxi:
          datacenter: F5 BD Lab                # Datacenter name
          hostname: 10.192.73.22               # ESXi hostname or IP address
        username: root                         # vCenter username
        password: "*****"                      # vCenter password
        guest: "BIGIP-VM"                      # Name of the BIG-IP to be created
        from_template: yes
        template_src: "BIG-IP VE 12.1.2.0.0.249-Template"   # Name of the template

Spin up a BIG-IP VM in VMware using govc

For more details on govc, refer to the govc GitHub and VMware GitHub pages.

Pre-condition: govc has been installed on the Ansible host.

Example playbook:

- name: Create VMware guest
  hosts: localhost
  connection: local
  tasks:
    - name: Import OVA and deploy BIG-IP VM
      command: "/usr/local/bin/govc import.ova -name=newVM-BIGIP005 /tmp/BIGIP-12.1.2.0.0.249.LTM-scsi.ova"   # Command to import the BIG-IP OVA file
      environment:
        GOVC_HOST: "10.192.73.100"             # vCenter hostname or IP address
        GOVC_URL: "https://10.192.73.100/sdk"
        GOVC_USERNAME: "root"                  # vCenter username
        GOVC_PASSWORD: "*******"               # vCenter password
        GOVC_INSECURE: "1"
        GOVC_DATACENTER: "F5 BD Lab"           # Datacenter name
        GOVC_DATASTORE: "datastore1 (5)"       # Datastore on which to store the OVA file
        GOVC_RESOURCE_POOL: "Testing"          # Resource pool to use
    - name: Power on the VM
      command: "/usr/local/bin/govc vm.power -on newVM-BIGIP005"
      environment:
        GOVC_HOST: "10.192.73.100"
        GOVC_URL: "https://10.192.73.100/sdk"
        GOVC_USERNAME: "root"
        GOVC_PASSWORD: "vmware"
        GOVC_INSECURE: "1"
        GOVC_DATACENTER: "F5 BD Lab"
        GOVC_DATASTORE: "datastore1 (5)"
        GOVC_RESOURCE_POOL: "Testing"

Public cloud

Spin up a BIG-IP using CloudFormation templates in AWS

For more details on the BIG-IP CloudFormation templates, refer to the following GitHub page.

Pre-condition: The CloudFormation JSON template has been downloaded to the Ansible host.

Example playbook:

- name: Launch BIG-IP CFT in AWS
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Launch BIG-IP CFT
      cloudformation:
        aws_access_key: "******************"   # AWS access key
        aws_secret_key: "******************"   # AWS secret key
        stack_name: "StandaloneBIGIP-1nic-experimental-Ansible"
        state: "present"
        region: "us-west-2"
        disable_rollback: true
        template: "standalone-hourly-1nic-experimental.json"   # JSON blob for the CFT
        template_parameters:                   # template parameters
          availabilityZone1: "us-west-2a"
          sshKey: "bigip-test"
        validate_certs: false
      register: stack
    - name: Get facts (IP address) from a CloudFormation stack
      cloudformation_facts:
        aws_access_key: "*****************"
        aws_secret_key: "*****************"
        region: "us-west-2"
        stack_name: "StandaloneBIGIP-1nic-experimental-Ansible"
      register: bigip_ip_address
    - set_fact:                                # Extract the BIG-IP MGMT IP address
        ip_address: "{{ bigip_ip_address['ansible_facts']['cloudformation']['StandaloneBIGIP-1nic-experimental-Ansible']['stack_outputs']['Bigip1subnet1Az1SelfEipAddress'] }}"
    - copy:                                    # Copy the BIG-IP MGMT IP address to a file
        content: "bigip_ip_address: {{ ip_address }}"
        dest: "aws_var_file.yaml"              # Copied IP address can be referenced from this file
        mode: 0644

The above are a few ways to spin up a BIG-IP Virtual Edition in your private or public cloud environment. Once the BIG-IP is installed, use the F5 Ansible modules to deploy the application on the BIG-IP. Refer to the DevCentral article to learn more about Ansible roles and how we can use roles to onboard and network a BIG-IP. Included is a simple playbook that you can download and run against the BIG-IP.

- name: Onboarding BIG-IP
  hosts: bigip                                 # bigip variable should be present in the ansible inventory file
  gather_facts: false
  tasks:
    - name: Configure NTP server on BIG-IP
      bigip_device_ntp:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        ntp_servers: "172.2.1.1"
        validate_certs: False
      delegate_to: localhost
    - name: Configure BIG-IP hostname
      bigip_hostname:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        validate_certs: False
        hostname: "bigip1.local.com"
      delegate_to: localhost
    - name: Manage SSHD setting on BIG-IP
      bigip_device_sshd:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        banner: "enabled"
        banner_text: "Welcome - CLI username/password to login"
        validate_certs: False
      delegate_to: localhost
    - name: Manage BIG-IP DNS settings
      bigip_device_dns:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        name_servers: "172.2.1.1"
        search: "localhost"
        ip_version: "4"
        validate_certs: False
      delegate_to: localhost

For more information on BIG-IP Ansible playbooks, visit the following GitHub link.
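A quick, hedged sketch of how you might run the onboarding playbook above once the BIG-IP's management address is known. The inventory and playbook file names are hypothetical, and in practice you would keep credentials in Ansible Vault rather than hard-coded in the playbook.

# hypothetical inventory with a 'bigip' group, matching the 'hosts: bigip' line in the playbook
cat > inventory.ini <<'EOF'
[bigip]
10.1.1.245
EOF

# run the onboarding playbook against that inventory
ansible-playbook -i inventory.ini onboard-bigip.yaml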
DevCentral Cloud Month Wrap

Is it the end of June already? At least it ended on a Friday and we can close out DevCentral’s Cloud Month followed by the weekend! First, huge thanks to our Cloud Month authors: Suzanne, Hitesh, Greg, Marty and Lori. Each delivered an informative series (23 articles in all!) from their area of expertise, and the DevCentral team appreciates their involvement. We hope you enjoyed the content as much as we enjoyed putting it together. And with that, that’s a wrap for DevCentral Cloud Month. You can check out the original day-by-day calendar, and below is each of the series if you missed anything. Thanks for coming by and we’ll see you in the community.

AWS - Suzanne & Thomas
- Successfully Deploy Your Application in the AWS Public Cloud
- Secure Your New AWS Application with an F5 Web Application Firewall
- Shed the Responsibility of WAF Management with F5 Cloud Interconnect
- Get Back Speed and Agility of App Development in the Cloud with F5 Application Connector

Cloud/Automated Systems – Hitesh
- Cloud/Automated Systems need an Architecture
- The Service Model for Cloud/Automated Systems Architecture
- The Deployment Model for Cloud/Automated Systems Architecture
- The Operational Model for Cloud/Automated Systems Architecture

Azure – Greg
- The Hitchhiker’s Guide to BIG-IP in Azure
- The Hitchhiker’s Guide to BIG-IP in Azure – ‘Deployment Scenarios’
- The Hitchhiker’s Guide to BIG-IP in Azure – ‘High Availability’
- The Hitchhiker’s Guide to BIG-IP in Azure – ‘Life Cycle Management’

Google Cloud – Marty
- Deploy an App into Kubernetes in less than 24 Minutes
- Deploy an App into Kubernetes Even Faster (Than Last Week)
- Deploy an App into Kubernetes Using Advanced Application Services
- What’s Happening Inside My Kubernetes Cluster?

F5 Friday #Flashback – Lori
- Flashback Friday: The Many Faces of Cloud
- Flashback Friday: The Death of SOA Has (Still) Been Greatly Exaggerated
- Flashback Friday: Cloud and Technical Data Integration Challenges Waning
- Flashback Friday: Is Vertical Scalability Still Your Problem?

Cloud Month Lightboard Lesson Videos – Jason
- Lightboard Lessons: BIG-IP in the Public Cloud
- Lightboard Lessons: BIG-IP in the Private Cloud

#DCCloud17 X-Tra!
- BIG-IP deployments using Ansible in private and public cloud

The Weeks
- DevCentral Cloud Month - Week Two
- DevCentral Cloud Month - Week Three
- DevCentral Cloud Month - Week Four
- DevCentral Cloud Month - Week Five
- DevCentral Cloud Month Wrap

ps
Cloud Month on DevCentral

#DCCloud17 The term ‘Cloud’ as in Cloud Computing has been around for a while. Some insist Western Union invented the phrase in the 1960s; others point to a 1994 AT&T ad for the PersonaLink Services; and still others argue it was Amazon in 2006 or Google a few years later. And Gartner had cloud computing at the top of their Hype Cycle in 2009. No matter the birth year, cloud computing has become an integral part of an organization’s infrastructure and is not going away anytime soon. A 2017 SolarWinds IT Trends report says 95% of businesses have migrated critical applications to the cloud, and F5's SOAD report notes that 20% of organizations will have over half their applications in the cloud this year. It is so critical that we’ve decided to dedicate the entire month of June to the Cloud.

We’ve planned a cool cloud encounter for you this month. We’re lucky to have many of F5’s cloud experts offering their 'how-to' expertise with multiple 4-part series. The idea is to take you through a typical F5 deployment for various cloud vendors throughout the month. On Mondays, we’ve got Suzanne Selhorn & Thomas Stanley covering AWS; Wednesdays, Greg Coward will show how to deploy in Azure; Thursdays, Marty Scholes walks us through Google Cloud deployments, including Kubernetes. But wait, there’s more! On Tuesdays, Hitesh Patel is doing a series on the F5 Cloud/Automation Architectures and how F5 plays in the Service Model, Deployment Model and Operational Model, no matter the cloud. And on F5 Friday #Flashback, starting tomorrow, we’re excited to have Lori MacVittie revisit some 2008 #F5Friday cloud articles to see if anything has changed a decade later. Hint: It has… mostly. In addition, I’ll offer my weekly take on the tasks & highlights that week.

Below is the calendar for DevCentral's Cloud Month, and we’ll be lighting up the links as they get published, so bookmark this page and visit daily! Incidentally, I wrote my first cloud tagged article on DevCentral back in 2009. And if you missed it, Cloud Computing won the 2017 Preakness. Cloudy Skies Ahead!

June 2017 schedule (weekdays only):

- Thu, Jun 1: Cloud Month on DevCentral Calendar
- Fri, Jun 2: Flashback Friday: The Many Faces of Cloud (Lori MacVittie)
- Mon, Jun 5: Successfully Deploy Your Application in the AWS Public Cloud (Suzanne Selhorn)
- Tue, Jun 6: Cloud/Automated Systems need an Architecture (Hitesh Patel)
- Wed, Jun 7: The Hitchhiker’s Guide to BIG-IP in Azure (Greg Coward)
- Thu, Jun 8: Deploy an App into Kubernetes in less than 24 Minutes (Marty Scholes)
- Fri, Jun 9: Flashback Friday: The Death of SOA Has (Still) Been Greatly Exaggerated (Lori)
- Mon, Jun 12: Secure Your New AWS Application with an F5 Web Application Firewall (Suzanne)
- Tue, Jun 13: The Service Model for Cloud/Automated Systems Architecture (Hitesh); DCCloud17 X-tra! BIG-IP deployments using Ansible in private and public cloud
- Wed, Jun 14: The Hitchhiker’s Guide to BIG-IP in Azure – ‘Deployment Scenarios’ (Greg); DCCloud17 X-tra! LBL Video: BIG-IP in the Public Cloud
- Thu, Jun 15: Deploy an App into Kubernetes Even Faster (Than Last Week) (Marty)
- Fri, Jun 16: Flashback Friday: Cloud and Technical Data Integration Challenges Waning (Lori)
- Mon, Jun 19: Shed the Responsibility of WAF Management with F5 Cloud Interconnect (Suzanne)
- Tue, Jun 20: The Deployment Model for Cloud/Automated Systems Architecture (Hitesh)
- Wed, Jun 21: The Hitchhiker’s Guide to BIG-IP in Azure – ‘High Availability’ (Greg); DCCloud17 X-tra! LBL Video: BIG-IP in the Private Cloud
- Thu, Jun 22: Deploy an App into Kubernetes Using Advanced Application Services (Marty)
- Fri, Jun 23: Flashback Friday: Is Vertical Scalability Still Your Problem? (Lori)
- Mon, Jun 26: Get Back Speed and Agility of App Development in the Cloud with F5 Application Connector (Suzanne)
- Tue, Jun 27: The Operational Model for Cloud/Automated Systems Architecture (Hitesh)
- Wed, Jun 28: The Hitchhiker’s Guide to BIG-IP in Azure – ‘Life Cycle Management’ (Greg)
- Thu, Jun 29: What’s Happening Inside My Kubernetes Cluster? (Marty)
- Fri, Jun 30: Cloud Month Wrap!

Titles subject to change... but not by much.

ps
What's Happening Inside My Kubernetes Cluster?

Introduction

This article series has taken us a long way. We started with an overview of Kubernetes. In the second week we deployed complex applications using Helm and visualized complex applications using Yipee.io. During the third week we enabled advanced application delivery services for all of our pods. In this fourth and final week, we are going to gain visibility into the components of the pod. Specifically, we are going to deploy a microservice application consisting of multiple pods and aggregate all of the logs into a single pane of glass using Splunk. You will be able to slice and dice the logs any number of ways to see exactly what is happening down at the pod level.

To accomplish visibility, we are going to do four things:

- Deploy a microservices application
- Configure Splunk Cloud
- Configure Kubernetes to send logs to Splunk Cloud
- Visualize the logs

Deploy a Microservices Application

As in previous articles, this article will take place using Google Cloud. Log into the Google Cloud Console and create a cluster. Once done, open a Google Cloud Shell session. Fortunately, Eberhard Wolff has already assembled a simple microservices application. First, set the credentials.

gcloud container clusters get-credentials cluster-1 --zone us-central1-a

We simply need to download the shell script.

wget https://raw.githubusercontent.com/ewolff/microservice-kubernetes/master/microservice-kubernetes-demo/kubernetes-deploy.sh

Next, simply run the shell script. This may take several minutes to complete.

bash ./kubernetes-deploy.sh

Once finished, check to see that all of the pods are running. You will see the several pods that comprise the application. Many of the pods provide small services (microservices) to other pods.

kubectl get pods

If that looks good, find the external IP address of the Apache service. Note that the address may be pending for several minutes. Run the command until a real address is shown.

kubectl get svc apache

Put the external IP address into your browser. The simple application has several functioning components. Feel free to try each of the functions. Every click will generate logs that we can analyze later. That was it: you now have a microservices application running in Kubernetes. But which component is processing what traffic? If there are any slowdowns, which component is having problems?

Configure Splunk Cloud

Splunk is a set of tools for ingesting, processing, searching, and analyzing machine data. We are going to use it to analyze our application logs. Splunk comes in many forms, but for our purposes, the free trial of the cloud (hosted) service will work perfectly. Go to https://www.splunk.com/en_us/download.html, then select the Free Cloud Trial. Fill out the form. The form may take a while to process. View the instance. Finally, accept the terms. You now have a Splunk instance for free for the next 15 days.

Watch for an email from Splunk. Much of the Splunk and Kubernetes configuration that follows is based on http://jasonpoon.ca/2017/04/03/kubernetes-logging-with-splunk/. When the email from Splunk arrives, click on the link. This is your private instance of Splunk Cloud that has a lot of sample records. To finish the configuration, first let Splunk know that you want to receive records from a Universal Forwarder, which is Splunk-speak for an external agent. In our case, we will be using the Universal Forwarder to forward container logs from Kubernetes.

To configure Splunk, click to choose a default dashboard. Select Forwarders: Deployment.
You will be asked to set up forwarding. Click to enable forwarding. Click Enable. Forwarding is configured.

Next we need to download the Splunk credentials. Go back to the link supplied in the email, and click on the Universal Forwarder in the left pane. Download the Universal Forwarder credentials. We need to get this file to the Google Cloud Shell. One way to do that is to create a bucket in Google Storage. On the Google Cloud page, click on the Storage Browser. Create a transfer bucket. You will need to pick a name unique across Google; I chose mls-xfer. After typing in the name, click Create. Next, upload the credentials file from Splunk by clicking Upload Files. That’s all we need from Splunk right now. The next step is to configure Kubernetes to send the log data to Splunk.

Configure Kubernetes to Send Logs to Splunk Cloud

In this section we will configure Kubernetes to send container logs to Splunk for visualization and analysis. Go to Google Cloud Shell to confirm the Splunk credential file is visible. Substitute your bucket name for mls-xfer.

gsutil ls gs://mls-xfer

If you see the file, then you can copy it to the Google Cloud Shell. Again, use your bucket name. Note the trailing dot.

gsutil cp gs://mls-xfer/splunkclouduf.spl .

If successful, you will have the file in the Google Cloud Shell, where you can extract it.

tar xvf ./splunkclouduf.spl

You should see the files being extracted.

splunkclouduf/default/outputs.conf
splunkclouduf/default/cacert.pem
splunkclouduf/default/server.pem
splunkclouduf/default/client.pem
splunkclouduf/default/limits.conf

Next we need to build a file to deploy the Splunk forwarder.

kubectl create configmap splunk-forwarder-config --from-file splunkclouduf/default/ --dry-run -o yaml > splunk-forwarder-config.yaml

Before using that file, we need to add some lines near the end of it, after the last certificate.

  inputs.conf: |
    # watch all files in
    [monitor:///var/log/containers/*.log]
    # extract `host` from the first group in the filename
    host_regex = /var/log/containers/(.*)_.*_.*\.log
    # set source type to Kubernetes
    sourcetype = kubernetes

Spaces and columns are important here. The last few lines of my splunk-forwarder-config.yaml file look like this:

    -----END ENCRYPTED PRIVATE KEY-----
  inputs.conf: |
    # watch all files in
    [monitor:///var/log/containers/*.log]
    # extract `host` from the first group in the filename
    host_regex = /var/log/containers/(.*)_.*_.*\.log
    # set source type to Kubernetes
    sourcetype = kubernetes
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: splunk-forwarder-config

Create the configmap using the supplied file.

kubectl create -f splunk-forwarder-config.yaml
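Before moving on, it can be worth a quick sanity check that the ConfigMap landed with the inputs.conf stanza intact. A small sketch using standard kubectl commands:

# confirm the ConfigMap exists and includes the inputs.conf key added above
kubectl get configmap splunk-forwarder-config
kubectl describe configmap splunk-forwarder-config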
The next step is to create a daemonset, which is a container that runs on every node of the cluster. Copy and paste the below text into a file named splunk-forwarder-daemonset.yaml using vi or your favorite editor.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: splunk-forwarder-daemonset
spec:
  template:
    metadata:
      labels:
        app: splunk-forwarder
    spec:
      containers:
      - name: splunkuf
        image: splunk/universalforwarder:6.5.2-monitor
        env:
        - name: SPLUNK_START_ARGS
          value: "--accept-license --answer-yes"
        - name: SPLUNK_USER
          value: root
        volumeMounts:
        - mountPath: /var/run/docker.sock
          readOnly: true
          name: docker-socket
        - mountPath: /var/lib/docker/containers
          readOnly: true
          name: container-logs
        - mountPath: /opt/splunk/etc/apps/splunkclouduf/default
          name: splunk-config
        - mountPath: /var/log/containers
          readOnly: true
          name: pod-logs
      volumes:
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
      - name: container-logs
        hostPath:
          path: /var/lib/docker/containers
      - name: pod-logs
        hostPath:
          path: /var/log/containers
      - name: splunk-config
        configMap:
          name: splunk-forwarder-config

Finally, create the daemonset.

kubectl create -f splunk-forwarder-daemonset.yaml

The microservice app should be sending logs right now to your Splunk Cloud instance. The logs are updated every 15 minutes, so it might be a while before the entries show in Splunk. For now, explore the microservices ordering application so that log entries are generated. Feel free also to explore Splunk.

Visualize the Logs

Now that logs are appearing in Splunk, go to the link in the email from Splunk. You should see a dashboard with log entries representing activity in the order processing application. Immediately on the dashboard you can see several options. You can drill down the forwarders by status. Further down the page you can see a list of the forwarding instances, along with statistics. Below that is a graph of activity across the instances. Explore the Splunk application. The combination of logs from several pods provides insight into the activity among the containers.

Clean Up

When you are done exploring, cleaning up is a breeze. Just delete the cluster.

Conclusion

This article series helped scratch the surface of what is possible with Kubernetes. Even if you had no Kubernetes knowledge, the first article gave an overview and deployed a simple application. The second article introduced two ways to automate deployments. The third article showed how to integrate application delivery services. This article closed the loop by demonstrating application monitoring capabilities. You are now in a position to have a meaningful conversation about Kubernetes with just about anyone, even experts. Kubernetes is growing and changing quickly, and welcome to the world of containers.

Series Index

- Deploy an App into Kubernetes in less than 24 Minutes
- Deploy an App into Kubernetes Even Faster (Than Last Week)
- Deploy an App into Kubernetes Using Advanced Application Services
- What's Happening Inside my Kubernetes Cluster?
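One small operational footnote to the forwarder deployment above: before you tear the cluster down, you can confirm the DaemonSet actually put a forwarder pod on every node. This is only a sketch; the label comes from the DaemonSet spec shown earlier.

kubectl get daemonset splunk-forwarder-daemonset

# one forwarder pod per node, using the app label from the spec
kubectl get pods -l app=splunk-forwarder -o wide

# tail a forwarder pod's own output if entries are not showing up in Splunk
kubectl logs -l app=splunk-forwarder --tail=20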
The Hitchhiker’s Guide to BIG-IP in Azure – “Life Cycle Management”

Hello fellow travelers, and welcome to the fourth and final installment of “The Hitchhiker’s Guide to BIG-IP in Azure”. In the spirit of teamwork (and because he’s an even bigger sci-fi nerd than me), I’ve asked my colleague, Patrick Merrick, to provide the commentary for this final installment. Take it away, Patrick!

Hi travelers! No doubt you have been following the evolution of this blog series as Greg navigated Azure-specific topics describing how cloud-based services differ from our traditional understanding of how we position the BIG-IP in an on-premises deployment. If not, I have provided the predecessors to this post below for your enjoyment!

- The Hitchhiker’s Guide to BIG-IP in Azure
- The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”
- The Hitchhiker’s Guide to BIG-IP in Azure – “High Availability”

To carry on the theme, I have decided to also take a page from the legendary author Douglas Adams to help explain F5’s position on life cycle management. Life cycle management historically can be likened to the Infinite Improbability Drive: regardless of best intentions, you rarely end up in the space that you had intended, but generally where you needed to be. For those of you who are not “in the know”, I have left a brief description of said improbability drive below.

“The infinite improbability drive is a wonderful new method of crossing interstellar distances in a mere nothing of a second, without all that tedious mucking about in hyperspace. It was discovered by lucky chance, and then developed into a governable form of propulsion by the Galactic Government's research centre on Damogran.” - Douglas Adams, “The Hitchhiker's Guide to the Galaxy”

In my previous life, I was a consultant and had the duty of integrating solutions into previously architected infrastructures without causing disruption to end users. In this “new’ish” world of DevOps, or “life at cloud speed”, we are discovering that life cycle management isn’t necessarily tied to major releases and minor updates. With that said, let’s dispense with the Vogon bureaucratic method, grab our towels and wade into deep water.

“According to the Guide, the Vogons are terribly bureaucratic and mean. They're not going to break the rules in order to help you. On the other hand, they're not exactly evil—they're not going to break the rules in order to harm you, either. Still, it may be hard to remember that when you're being chased by the Ravenous Bugblatter Beast of Traal while the Vogons are busy going through the appropriate forms” - Douglas Adams, “The Hitchhiker's Guide to the Galaxy”

Azure Instance Type Upgrades

As you have come to expect, F5 has published recommendations for configuring your instance in Azure. Your instance configuration will rely largely on what modules you would like to provision in your infrastructure, but this topic is well covered in the following link: BIG-IP® Virtual Edition and Microsoft Azure. As always, what is not “yet” covered in the deployment guide can likely be found on DevCentral. If you find yourself in a scenario where you need to manipulate an instance of BIG-IP, the process has been well documented by TechNet in “How to: Change the Size of a Windows Azure Virtual Machine” and can be achieved by utilizing the following mechanisms.

Azure Management Portal

There is little bureaucracy from the management portal: aside from logging in, choosing your desired settings, whether you are looking to increase cores or memory, and then ultimately clicking the ‘Save’ button, you are well served here.
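For travelers who prefer a command line over the portal, the same resize can also be sketched with the Azure CLI. This is only an illustrative sketch; the resource group and VM names are placeholders, and you should pick a size from F5's published instance recommendations mentioned above.

# list the sizes the VM can move to (names are hypothetical)
az vm list-vm-resize-options --resource-group bigip-rg --name bigip-ve-01 --output table

# resize the instance; note the VM restarts as part of the operation
az vm resize --resource-group bigip-rg --name bigip-ve-01 --size Standard_DS3_v2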
PowerShell Script

One could argue that there is a bit more Vogon influence here, but I would contest that your flexibility from the programmatic perspective is significantly more robust. Aside from being confined by PowerShell parameters and variables, this approach is also well outlined in the TechNet article above.

BIG-IP OS Upgrades

More good news! But first, another Douglas Adams quote.

“There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.” - Douglas Adams, “The Hitchhiker's Guide to the Galaxy”

Upgrading a BIG-IP in Azure is no different than updating any VE, or our physical appliances for that matter.

1. Download the ISO and MD5 files.
2. Install the downloaded files to an inactive boot location.
3. Boot the BIG-IP VE to the new boot location.

Tip: If there is a problem during installation, you can use log messages to troubleshoot a solution. The system stores the installation log file as /var/log/liveinstall.log. If you are new to this process, more detailed information can be found by reviewing yet another knowledge center article: Updating and Upgrading BIG-IP VE.

Utilize traditional recommended best practices

I don’t normally start paragraphs off with a quote, but when I do, it’s Douglas Adams.

“You know,” said Arthur, “it’s at times like this, when I’m trapped in a Vogon airlock with a man from Betelgeuse, and about to die of asphyxiation in deep space that I really wish I’d listened to what my mother told me when I was young.” “Why, what did she tell you?” “I don’t know, I didn’t listen.” - Douglas Adams, “The Hitchhiker's Guide to the Galaxy”

Before attempting any of the aforementioned solutions, please be sure that you have a valid backup of your configuration: Backing up your BIG-IP system configuration.

A/S upgrade

In this scenario, you would have a device group that also has ConfigSync enabled. This is a high-availability feature that synchronizes configuration changes from one BIG-IP to the other. It ensures that the BIG-IP device group members maintain the same configuration data and work in tandem to more efficiently process application traffic. At a high level, we will start with the passive node first and use the following steps to accomplish this task. More detailed information can be found by reviewing the following article: Introduction to upgrading version 11.x, or later, BIG-IP software.

1. Preparing BIG-IP modules for an upgrade
2. Preparing BIG-IP device groups for an upgrade
3. Upgrading each device within the device group
4. Changing states of the traffic groups
5. Configuring HA groups (if applicable)
6. Configuring module-specific settings
7. Verifying the software upgrade for the device group

Additional Links:

- The Hitchhiker’s Guide to BIG-IP in Azure
- The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”
- The Hitchhiker’s Guide to BIG-IP in Azure – “High Availability”
- BIG-IP in Azure? Are You Serious?
- F5 Networks GitHub
- Understand the structure and syntax of Azure Resource Manager templates
- Deploying BIG-IP Virtual Edition in Azure
- BIG-IP Systems: Upgrading Software
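For readers who prefer the command line, the three upgrade steps above can also be driven from the BIG-IP shell with tmsh. This is only a sketch: it assumes the downloaded ISO has already been copied to /shared/images and that HD1.2 is an existing inactive boot location on your system; the image file name is a placeholder.

# check current software volumes and which one is active
tmsh show sys software

# install the downloaded image into the inactive boot location
tmsh install sys software image BIGIP-<version>.iso volume HD1.2

# once the install completes, boot into the new location
tmsh reboot volume HD1.2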
The Operational Model for Cloud/Automated Systems Architectures

Recap

Previous article: The Deployment Model for Cloud/Automated Systems Architectures

Ok. We're almost there. The finish line is in sight! Last week we covered our Deployment Model. This week we will wrap up this series of articles by covering our Operational Model and our Conclusion.

Operational Model

So, we've covered how we use our Service Model to define Appropriately Abstracted L4-7 services. We then use our Deployment Model to deploy those Services into various Environments using DevOps Continuous Deployment and Continuous Improvement methodologies. One way to think about the timeline is the following:

- Service Model: Day < 0
- Deployment Model: Day 0
- Operational Model: Day 0+n

Our Operational Model is focused on operating Automated Systems in a safe, production manner. From a consumer perspective the Operational Model is, in many ways, the most critical because it's being interacted with on a regular basis, which ties in with our previous definition:

"Provides stable and predictable workflows for changes to production environments, data and telemetry gathering and troubleshooting."

Following the pattern from our previous article, let's cover the Truths, Attributes and an F5 Expression of the Operational Model.

Operational Model Truths

Let's take a look at the Truths for the Operational Model:

- Support Mutability from Triggered and Sensed Metrics
- Mutability Must Consume a Source of Truth
- Bound Elasticity within the capabilities of Deployment Model Scalability attributes
- Enable Break/Fix & Troubleshoot Operations
- Provide Analytics & Visibility

Support Mutability from Triggered or Sensed Metrics

In the previous articles we mentioned the term Mutate (or in this context Mutability) a few times. Mutate, mutations and mutability are all ways of saying changes to a Service Deployment in an Environment. These changes can be large, such as deploying a Service in a new Environment, or small, such as updating the server IPs contained within a Pool of resources.

Triggered Mutability is predicated on a system outside of the vendor-specific automation framework directing a mutation of a Service. These mutation actions can be triggered either by other automated systems or by humans.

Sensed Mutability is predicated on the vendor-specific automation framework sensing that a change to the Service Deployment is required and effecting the change in an automated fashion.

When implementing our Operational Model it is critical that we define which mutations of the Service are Triggered or Sensed. Furthermore, the model should consume as many Sensed Mutations as possible.

Mutability Must Consume a Source of Truth

Mutations of a Service outside of the Source of Truth (out-of-band, or OOB) result in a fundamental problem in computer science called the Consensus Problem. This problem is defined as:

"The consensus problem requires agreement among a number of processes (or agents) for a single data value. Some of the processes (agents) may fail or be unreliable in other ways, so consensus protocols must be fault tolerant or resilient. The processes must somehow put forth their candidate values, communicate with one another, and agree on a single consensus value." [1]

This truth can be summed up simply: "No out-of-band changes. Ever!"

When OOB changes occur in most environments, it is not possible to reach consensus in an automated fashion. This results in a human having to act as the arbiter of all disputes, and can have massive impacts on the reliability of the system.
To avoid this issue we must drive all Operational Mutations through a Source of Truth so the system remains in a consistent state.

References:
[1] https://en.wikipedia.org/wiki/Consensus_(computer_science)

Bound Elasticity within the capabilities of Deployment Model Scalability attributes

In our previous article we discussed the Mutable Scalability attribute of the Deployment Model. One of the key desirable attributes is the ability to scale infrastructure resources elastically with user load. It's important to understand that Elasticity is an Operational Mutation of the underlying Scalability attribute of an Environment; therefore, we must bound our expression of Elasticity within the capabilities of the Mutable Scalability attribute in the Deployment Model.

Enable Break/Fix & Troubleshoot Operations

One of the critical decisions that must be made when designing automated systems is how anomalous operations can be identified and resolved. A good analogy to use here is the modern airliner. Both Boeing and Airbus produce safe, efficient and reliable airplanes; however, there is a critical difference in how Boeing and Airbus design their control systems.

Boeing designs its control systems on the premise that the pilot is always in charge; the pilots have as-direct-as-possible control over the plane's flight envelope. This includes allowing the pilots to control the plane in a way that may be deemed as exceeding the limits of its design.

Airbus, on the other hand, designs its control systems on the idea that the pilot inputs are an input to an automated system. This system then derives decisions on how to drive the control surfaces of the plane based on pilot and other inputs. The system is designed to prevent or filter out pilot input that exceeds the designed safety limits of the plane.

Personal opinions aside, in an emergency scenario, there is not necessarily any right answer on which system can overcome an anomaly. The focus is instead on training the pilots to understand how to interact with the underlying system and resolve the issue.

For this architecture we've picked the 'Boeing' model. The reason behind this is that a reliable model for determining the 'flight envelope' does not always exist. Without this it is not possible to predictably provide the correct resolution for an anomaly (which is what the Airbus model requires). We have purposely designed our systems to give the operator FULL control of the system at all times. The caveat here is that you should either drive change through a Source of Truth OR disable automation until an issue is resolved.

Provide Analytics & Visibility

All good Operational Models are predicated on the ability to monitor the underlying system in a concise and efficient manner. This visibility needs to be more than just a stream of log data. The automated system should properly identify relevant data and surface that information as needed to inform automated or manual Operational Mutations. This data should be analyzed over time to provide insights into how the system operates as a whole. This data is then used to help form our Continuous Improvement feedback loops, resulting in the ability to iterate our models over time.
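To make the "no out-of-band changes" and Triggered Mutability truths above a little more concrete, here is a minimal, hedged sketch of a triggered mutation that flows through a version-controlled source of truth and then lands on a BIG-IP via its iControl REST API, rather than through a manual GUI edit. The repository, pool name, address and credentials are all placeholders.

# refresh the declared state from the source of truth (a git repo in this sketch)
git pull origin master

# push the change to the BIG-IP through its API; here, adding a new member to an existing pool
curl -sku admin:<password> -H "Content-Type: application/json" \
  -X POST https://<bigip-mgmt-ip>/mgmt/tm/ltm/pool/~Common~app_pool/members \
  -d '{"name":"10.0.0.21:80"}'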
Operational Model Attributes

Now that we've covered our Truths, let's take a look at our Attributes:

- Mutability
- Source of Truth
- Elasticity
- Continuous Ops
- Analytics & Visibility

Mutability

"Use the inherent toolchain available to the deployment"

To implement Operational Mutability we should always use the underlying toolchain in the Deployment Model. This means that the Operators in our environment should understand the toolchain used in the Deployment Model and how they can interact with it in a safe, reliable manner.

Source of Truth

We've discussed Source of Truth quite a bit. We include this item as an Attribute to reinforce that "Operational changes should be driven from Source of Truth".

Elasticity

"Elasticity can be Triggered or Sensed, however, must be bound by the Deployment Model"

Building off the explanation in our Truths section, we could implement two different types of Scalability in the Deployment Model:

- Service Level: Consume an elastic scale mutation of compute resources by adding Pool Members to a Pool
- Environment Level: Scale BIG-IP instances elastically based on requests per second to a large web app

If we've only implemented Service Level Elasticity, then our Operational Model should reflect that we only allow operational mutations at the Service Level.

Continuous Ops

"Always Fail Forward"

What does this mean? Let's look at its complementary definition: "Don't roll back!" Uncomfortable yet? Most people are! The idea behind a "Fail Forward" methodology is that issues should always be resolved in a forward manner that leverages automation. The cumulative effect of years of 'roll back' operational methodology is that most infrastructures are horribly behind in various areas (software/firmware versions, best practice config, security patches, etc.). A Fail Forward methodology allows us to Continuously Improve and continually deliver innovation to the market.

Analytics & Visibility

We covered most of the details in our Truths. This attribute serves as a reminder that without Analytics & Visibility we cannot effectively implement Continuous Improvement for our Model and the overall Architecture.

Operational Model - F5 Expression

This final slide shows an example of how to implement all the Attributes of the Operational Model using F5 technology. As we've discussed, it's not required to implement every attribute in the first iteration. The slide references some F5-specific technology such as iApps, iWorkflow (iWf), etc. For context, here are links to more documentation for each tool:

- iApps: https://devcentral.f5.com/s/iapps
- iWorkflow: https://devcentral.f5.com/s/iworkflow
- App Services iApp: https://devcentral.f5.com/s/wiki/iapp.appsvcsiapp_index.ashx
- Splunk iApp: https://devcentral.f5.com/s/articles/f5-analytics-iapp

Conclusion

We've laid a good foundation in this article series. Where do we go from here? Well, first, I would recommend taking some time to look at what you're trying to accomplish and fitting it into our various models. The best way to do this is to start with a blank slate. Take a look at our attribute slides and fill them in with what works for your problem set. Then take those attributes and validate them with our Architectural and Model Truths. After a couple of iterations, a path forward should appear. At that point, call out to your F5 account team and ask for an F5 Systems Engineer who specializes in Cloud.
We've trained a global team of 150 SEs on this same material (using DevOps methodologies, of course) and we are ready to help you move forward and leverage Automation to:

Deliver YOUR innovation to the market

Keep an eye on DevCentral in the coming weeks. We will be publishing articles that take this series one step further by showing Environment-specific implementations with technology partners such as OpenStack, Cisco ACI, VMware ESX, Amazon AWS, Microsoft Azure and Google Cloud Platform. Thank you all for taking the time to read this series and we'll see you next time!
Get Back Speed and Agility of App Development in the Cloud with F5 Application Connector: Part 4 of 4

The challenge: The speed and agility of the cloud is lost; dev must request environment changes from IT again.
The solution: Use the F5 Application Connector to automatically update the BIG-IP.

In the last post, we showed how to get the stability and security of hosting BIG-IPs in the same data center as cloud servers (aka Cloud Interconnect). While this is a great solution, it re-created the problem of the dev team filing tickets with IT in order to move application servers to production.

Enter the Application Connector from F5. With the Application Connector, any time you create or delete an application server in the cloud, the BIG-IPs automatically know about it and update their configuration accordingly. And though the example below is talking about AWS, the Application Connector can be used in multiple clouds, helping prevent lock-in to any one cloud.

The Application Connector is made up of two components:

- The Application Connector Proxy, which is delivered as a Docker container that's deployed in a cloud environment
- The Application Connector Service Center, which is deployed as an iAppsLX package on the BIG-IPs

The Application Connector Proxy establishes an outbound connection to the BIG-IPs, using a secure TLS tunnel to encrypt traffic between the cloud app servers and the BIG-IPs. In our example, we're showing the Application Connector in conjunction with Cloud Interconnect, but your BIG-IPs can be physical or virtual (aka BIG-IP VEs), and can be on-premises or in a remote location.

Auto-discovery of nodes

As we said, after some initial configuration, the BIG-IPs are automatically updated with the latest nodes. In AWS, nodes are discovered and published automatically, and as of June 2017, similar functionality is also planned for Azure and Google. With these functions, you eliminate the need for manual updates to the BIG-IP; developers no longer have to contact IT every time they add or remove cloud servers.

In the following example, DevOps has chosen to disable two nodes in the Application Connector Proxy. This change is then reflected in the Application Connector Service Center as well. The Application Connector Service Center lets NetOps/SecOps access a full list of nodes and their statuses, no matter which cloud they are in. You can choose to disable automatic publishing to the BIG-IP, thus giving you the power to select which nodes you would like the BIG-IP to see.

Scale out to other clouds

You can now use multiple clouds and have the BIG-IP automatically updated with all the nodes. Even if your IP address ranges overlap across multiple clouds, the Application Connector handles it without issue.

Security

When you use the Application Connector, no public IP addresses need to be directly associated with the application servers. Because of this, the apps are hidden from clients and bad actors. Another security benefit is centralized encryption. Encryption keys no longer need to be stored in the cloud next to the application servers, but instead are stored on the BIG-IPs and can be shared across multiple clouds.

Consistent Services & Policies

When you're using the Application Connector, services configuration like load balancing, WAF, traffic manipulation, and authentication, as well as the policies that go with them, are all centrally managed on the BIG-IP by NetOps/SecOps/IT.

Low Maintenance

After the initial configuration of the Application Connector, no management or maintenance is necessary.
It’s simpler than maintaining a VPN tunnel, and it’s small, so you don’t have to worry about it taking up too many resources. DevOps no longer has to request changes whenever they add or remove app servers; they can update the Application Connector Proxy any time they choose.

Get Started with Application Connector

You probably want to get started, so here is a doc for you: https://support.f5.com/kb/en-us/products/app-connector/manuals/product/f5-application-connector-setup-config-1-1-0.html

And when you’re ready, you can download the Application Connector from downloads.f5.com: https://downloads.f5.com/esd/product.jsp?sw=BIG-IP&pro=F5_Application_Connector

And finally, in case you missed it, here are the previous posts:

- Successfully Deploy Your Application in the AWS Public Cloud: Part 1 of 4
- Secure Your New AWS Application with an F5 Web Application Firewall: Part 2 of 4
- Shed the Responsibility of WAF Management with F5 Cloud Interconnect: Part 3 of 4
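One operational footnote: since the Application Connector Proxy runs as a Docker container in your cloud environment, ordinary container tooling is enough to check on it. This is a generic sketch only; the container name is hypothetical, and the actual deployment steps are in the setup guide linked above.

# is the proxy container up? (the name filter is a placeholder)
docker ps --filter "name=app-connector"

# tail its recent output if nodes are not appearing in the Service Center
docker logs --tail 50 <app-connector-container-id>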
DevCentral Cloud Month - Week Five

What’s this week about? This is the final week of DevCentral’s Cloud Month, so let’s close out strong. Throughout the month, Suzanne, Hitesh, Greg, Marty and Lori have taken us on an interesting journey to share their unique cloud expertise. Last week we covered areas like high availability, scalability, responsibility, inter-connectivity and exploring the philosophy behind cloud deployment models. We also got a nifty Lightboard Lesson covering BIG-IP in the private cloud.

This week’s focus is on maintaining, managing and operating your cloud deployments. If you missed any of the previous articles, you can catch up with our Cloud Month calendar, and we’ll wrap up DevCentral's Cloud Month on Friday. Thanks for taking the journey with us; we hope it was educational, informative and entertaining!

ps

Related:

- Cloud Month on DevCentral
- DevCentral Cloud Month - Week Two
- DevCentral Cloud Month - Week Three
- DevCentral Cloud Month - Week Four