BIG-IP Virtual Edition
F5 in AWS Part 4 - Orchestrating BIG-IP Application Services with Open-Source Tools
Updated for Current Versions and Documentation
Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

The following post references code hosted at F5's Github repository f5networks/aws-deployments. This code provides a demonstration of using open-source tools to configure and orchestrate BIG-IP. Full documentation for F5 BIG-IP cloud work can be found at Cloud Docs: F5 Public Cloud Integrations.

So far we have talked about AWS networking basics, how to run BIG-IP in a VPC, and highly-available deployment footprints. In this post, we'll move on to my favorite topic, orchestration. By this point, you probably have several VMs running in AWS. You've lost track of which configuration is set up on which VM, and you have found yourself slowly going mad as you toggle between the AWS web portal and several SSH windows. I call this 'point-and-click' purgatory. Let's be blunt: why move to the cloud without realizing the benefits of automation, which cloud is such a large enabler of? If you remember our second article, we mentioned CloudFormation templates as a great way to deploy a standardized set of resources (perhaps BIG-IP plus the additional virtualized network resources) in EC2. This is a great start, but we need to configure these resources once they have started, and we need a way to define and execute workflows which will run across a set of hosts, perhaps even hosts which are external to the AWS environment. Enter the use of open-source configuration management and workflow tools that have been popularized by the software development community.

Open-source configuration management and AWS APIs

Lately, I have been playing with Ansible, which is a Python-based, agentless workflow engine for IT automation. By agentless, I mean that you don't need to install an agent on hosts under management. Ansible, like the other tools, provides a number of libraries (or "modules") which provide the ability to manage a diverse collection of remote systems. These modules are typically implemented through the use of API calls, often over HTTP. Out of the box, Ansible comes with several modules for managing resources in AWS. While the EC2 libraries provided are useful for basic orchestration use cases, we decided it would be easier to atomically manage sets of resources using the CloudFormation module. In doing so, we were able to deploy entire CloudFormation stacks which would include items like VPCs, networking elements, BIG-IP, app servers, etc. Underneath the covers, the CloudFormation Ansible module and our own project use the boto Python library to interact with AWS service endpoints.

Ansible provides some basic modules for managing BIG-IP configuration resources. These, along with libraries for similar tools, can be found here: Ansible, Puppet, SaltStack.

In the rest of this post, I'll discuss some work my colleagues and I have done to automate BIG-IP deployments in AWS using Ansible. While we chose to use Ansible, we readily admit that Puppet, Chef, Salt and whatever else you use are all appropriate choices for implementing deployment and configuration management workflows for your network. Each has its upsides and downsides, and different tools may lend themselves to different use cases for your infrastructure. Browse the web to figure out which tool is right for you.
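As a concrete illustration of those "API calls over HTTP", here is a minimal sketch of the kind of request such a module ends up making against BIG-IP's REST interface (more on that interface below). The management address, credentials, and member IPs are placeholders, not values from the aws-deployments project:

#!/bin/bash
# Create a pool on BIG-IP with a single authenticated iControl REST call,
# roughly what a configuration management module does on your behalf.
BIGIP=192.0.2.10      # hypothetical management address
CREDS='admin:admin'   # use real credentials or token auth in practice

curl -sk -u "$CREDS" -H "Content-Type: application/json" \
  -X POST "https://${BIGIP}/mgmt/tm/ltm/pool" \
  -d '{"name":"app-pool","monitor":"http","members":[{"name":"10.0.1.10:80"},{"name":"10.0.1.11:80"}]}'

# Read the object back to confirm it was created
curl -sk -u "$CREDS" "https://${BIGIP}/mgmt/tm/ltm/pool/~Common~app-pool"

Whether this call is made by hand, from a playbook, or from a CI job, it is the same request; the tools simply wrap it with idempotency checks and inventory management.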
Using Standardized BIG-IP Interfaces

Speaking of APIs, for years F5 has provided the ability to programmatically configure BIG-IP using iControl SOAP. As the audiences performing automation work have matured, so have the weapons of choice. The new hot ticket is REST (Representational State Transfer), and guess what, BIG-IP has a REST interface (you can probably figure out what it is called). Together, iControl SOAP and iControl REST give you the power to manage nearly every configuration element and feature of BIG-IP. These interfaces become extremely powerful when you combine them with your favorite open-source configuration management tool and a cloud that allows you to spin up and down compute and networking resources. In the project described below, we have also made use of iApps, deployed via iControl REST, as a way to create a standard virtual server configuration with the correct policies and profiles. The documentation in Github describes this in detail, but our approach shows how iApps provide a strongly supported way to manage network policy across engineering teams. For example, imagine that a team of software engineers has written a framework to deploy applications. You can package the network policy into iApps for various types of apps, and pass these to the teams writing the deployment framework.

Implementing a Service Catalog

To pull the above concepts together, a colleague and I put together the aws-deployments project. The goal was to build a simple service catalog which would enable a user to deploy a containerized application in EC2 with BIG-IP network services sitting in front. This is example code that is not supported by F5, but it is a proof of concept that shows how you can fully automate production-like deployments in AWS. Some highlights of the project include:

Use of iControl REST and iControl SOAP within Ansible playbooks to set up advanced topologies of BIG-IP in AWS.
Automated deployment of a basic ASM web application firewall policy to protect a vulnerable web app (Hackazon).
Use of iApps to manage virtual server configurations, including the WAF policy mentioned above.

Figure 1 - Generic Architecture for automating application deployments in public or private cloud

In examining the code, you will see that we provide the opportunity to provision all the deployment models outlined in our earlier post (a single standalone VE, standalone BIG-IP VEs striped across availability zones, clusters within an availability zone, etc.). We used Ansible and the interfaces on BIG-IP to orchestrate the workflows associated with these deployment models. To perform the clustering step, we used the iControl SOAP interface on BIG-IP. The final set of technology used is depicted in Figure 2.

Figure 2 - Technologies used in the aws-deployments project on Github

Read the Code and Test It Yourself

All the code I have mentioned is available at f5networks/aws-deployments. We encourage you to download and run the code for yourself. Setting up a development environment with the necessary dependencies is easy: we have packaged all the dependencies for use with either Vagrant or Docker as development tools. The instructions for either of these approaches can be found in the README.md or in the /docs directory. The following video shows an end-to-end usage example. (Keep in mind that the code has been updated since this video was produced.) At the end of the day, our goal for this work was to collect customer feedback.
Please provide some by leaving a comment below, or by filing 'pull requests' or 'issues' in Github. In the next few weeks, we will be updating the project to include the Hackazon app mentioned above, show how to cluster BIG-IP across availability zones, and how to deploy an ASM profile with an iApp. Have fun!

F5 in AWS Part 5 - Cloud-init, Single-NIC, and Auto Scale Out in BIG-IP
Updated for Current Versions and Documentation
Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

The following article covers features and examples in the 12.1 AWS Marketplace release, discussed in the following documentation:
Amazon Web Services: Single NIC BIG-IP VE
Amazon Web Services: Auto Scaling BIG-IP VE

You can find the BIG-IP Hourly and BYOL releases in the Amazon marketplace here. BIG-IP utility billing images are available, which makes it a great time to talk about some of this functionality.

So far in Chris's series, we have discussed some of the highly-available deployment footprints of BIG-IP in AWS and how these might be orchestrated. Several of these footprints leverage BIG-IP's Device Service Clustering (DSC) technology for configuration management across devices and also lend themselves to multi-app or multi-tenant configurations in a shared-service model. But what if you want to deploy BIG-IP on a per-app or per-tenant basis, in a horizontally scalable footprint that plays well with the concepts of elasticity and immutability in cloud? Today we have just the option for you. Before highlighting these scalable deployment models in AWS, we need to cover cloud-init and single-NIC configurations: two important additions to BIG-IP that enable an Auto Scaling topology.

Elasticity

Elasticity is obviously one of the biggest promises/benefits of cloud. By leveraging cloud, we are essentially tapping into the "unlimited" (at least relative to our own datacenters) resources large cloud providers have built. In practice, this means adopting new methodologies and approaches to truly deliver on it.

Immutability

In the traditional operational model of datacenters, everything was "actively" managed. Physical infrastructure still tends to lend itself to active management, but even virtualized services and applications running on top of the infrastructure were actively managed: servers were patched, code was upgraded live and in place, etc. However, achieving true elasticity, where things spin up or down and are more ephemeral in nature, required a new approach. Instead of trying to patch or upgrade individual instances, the approach is to treat them as disposable, which means focusing more on the build process itself (for example, Netflix's famous Building with Legos approach). Yes, the concept of golden images/snapshots has existed since virtualization, but cloud, with self-service, automation and auto scale functionality, forced this to a new level. Operations focus shifted more towards a consistent and repeatable "build" or "packaging" effort, with the final goal of creating instances that don't need to be touched, logged into, etc. In the specific context of AWS's Auto Scale groups, that means modifying the Auto Scale group's "launch config". The steps for creating new instances involve either referencing an entirely new image ID or perhaps a modification to a cloud-init config.

Cloud-init

What is it? First, let's talk about cloud-init as it is used with most Linux distributions. Most of you who are evaluating or operating in the cloud have heard of it. For those who haven't, cloud-init is an industry standard for bootstrapping machines at startup. It provides a simple domain-specific language for common infrastructure provisioning tasks.
You can read the official docs here. For the average Linux or systems engineer, cloud-init is used to perform tasks such as installing a custom package, updating yum repositories or installing certificates, to perform final customizations to a "base" or "golden" image. For example, the Security team might create an approved hardened base image, and various Dev teams would use cloud-init to customize the image so it boots up with an 'identity', if you will - an application server with an Apache webserver running, or a database server with MySQL provisioned.

Let's start with the most basic "Hello World" use of cloud-init: passing in User Data (in this case a simple bash script). If launching an instance via the AWS Console, on the Configure Instance page, navigate down to "Advanced Details":

Figure 1: User Data input field - bash

However, User Data is limited to less than 16KB, and one of the real powers of cloud-init comes from extending functionality past this boundary and providing a standardized way to provision, or ahem, "initialize" instances. Instead of using unwieldy bash scripts that probe whether this is an Ubuntu or a Red Hat instance and use this OS method or that OS method (e.g. apt-get vs. rpm) to install a package, configure users, DNS settings, mount a drive, etc., you can pass a YAML file starting with #cloud-config that does a lot of this heavy lifting for you.

Figure 2: User Data input field - cloud-config

Similar to one of the benefits of Chef, Puppet, Salt or Ansible, cloud-init provides a reliable OS or distribution abstraction, but where those approaches require external orchestration to configure instances, this is internally orchestrated from the very first boot, which is more conducive to the "immutable" workflow. NOTE: cloud-init also complements and helps bootstrap those tools for more advanced or sophisticated workflows (e.g. installing Chef/Puppet to keep long-running, non-immutable services under operation/management and to prevent configuration drift).

This brings us to another important distinction. Cloud-init originated as a project from Canonical (Ubuntu) and was designed for general purpose OSs. BIG-IP's OS (TMOS), however, is a highly customized, hardened OS, so most of the cloud-init modules don't strictly apply. Much of BIG-IP's configuration is consumed via its APIs (TMSH, iControl REST, etc.) and stored in its database, MCPD. We can still achieve some of the benefits of cloud-init, but instead we will mostly leverage the simple bash processor.

So when Auto Scaling BIG-IPs, there are a couple of approaches:
1) Creating a custom image, as described in the official documentation.
2) Providing a cloud-init configuration. This is a lighter-weight approach in that it doesn't require the customization work above.
3) Using a combination of the two: creating a custom image and leveraging cloud-init. For example, you may create a custom image with ASM provisioned and SSL certs/keys installed, and use cloud-init to configure additional environment-specific elements.

Disclaimer: Packaging is an art; just look at the rise of Docker and new operating systems. Ideally, the more you bake into the image upfront, the more predictable it will be and the faster it deploys. However, the less you build in, the more flexible you can be. Things like installing libraries, compiling, etc. are usually worth building into the image upfront.
However, BIG-IP is already a hardened image, and things like installing libraries are neither required nor recommended, so the task is more about addressing those last, lighter-weight configuration steps. That said, depending on your priorities and objectives, installing sensitive keying material, setting credentials, pre-provisioning modules, etc. might make good candidates for investing in building custom images.

Using Cloud-init with CloudFormation templates

Remember when we talked about how you could use CloudFormation templates in a previous post to set up BIG-IP in a standard way? Because the CloudFormation service by itself only gave us the ability to lay down the EC2/VPC infrastructure, we were still left with the remaining 80% of the work to do; we needed an external agent or service (in our case Ansible) to configure the BIG-IP and application services. Now, with cloud-init on the BIG-IP (using version 12.0 or later), we can take care of that remaining 80% for you.

Using Cloud-init with BIG-IP

As you can imagine, there's quite a lot you can do with just that one simple bash script shown above. However, more interestingly, we also installed AWS's CloudFormation helper scripts (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html) to help extend cloud-init and unlock a larger, more powerful set of AWS functionality. So when used with CloudFormation, our User Data simply changes to executing the following AWS CloudFormation helper script instead.

"UserData": {
  "Fn::Base64": {
    "Fn::Join": [ "", [
      "#!/bin/bash\n",
      "/opt/aws/apitools/cfn-init-1.4-0.amzn1/bin/cfn-init -v -s ", { "Ref": "AWS::StackId" },
      " -r ", "Bigip1Instance",
      " --region ", { "Ref": "AWS::Region" }, "\n"
    ] ]
  }
}

This allows us to do things like obtain variables passed in from the CloudFormation environment, grab various information from the metadata service, create or download files, run a particular sequence of commands, etc., so that once BIG-IP has finished running these, our entire application delivery service is up and running. For more information, this page discusses how metadata is attached to an instance using CloudFormation templates: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-commands.

Example BYOL and Utility CloudFormation Templates

We've posted several examples on Github to get you started: https://github.com/f5networks/f5-aws-cloudformation. In just a few short clicks, you can have an entire BIG-IP deployment up and running. The two examples below will launch an entire reference stack complete with VPCs, subnets, routing tables, a sample webserver, etc., and show the use of cloud-init to bootstrap a BIG-IP. Cloud-init is used to configure interfaces, Self IPs, database variables, a simple virtual server, and in the case of the BYOL instance, to license BIG-IP.
Let's take a closer look at the BIG-IP resource created in one of these to see what's going on here:

"Bigip1Instance": {
  "Metadata": {
    "AWS::CloudFormation::Init": {
      "config": {
        "files": {
          "/tmp/firstrun.config": {
            "content": {
              "Fn::Join": [ "", [
                "#!/bin/bash\n",
                "HOSTNAME=`curl http://169.254.169.254/latest/meta-data/hostname`\n",
                "TZ='UTC'\n",
                "BIGIP_ADMIN_USERNAME='", { "Ref": "BigipAdminUsername" }, "'\n",
                "BIGIP_ADMIN_PASSWORD='", { "Ref": "BigipAdminPassword" }, "'\n",
                "MANAGEMENT_GUI_PORT='", { "Ref": "BigipManagementGuiPort" }, "'\n",
                "GATEWAY_MAC=`ifconfig eth0 | egrep HWaddr | awk '{print tolower($5)}'`\n",
                "GATEWAY_CIDR_BLOCK=`curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/${GATEWAY_MAC}/subnet-ipv4-cidr-block`\n",
                "GATEWAY_NET=${GATEWAY_CIDR_BLOCK%/*}\n",
                "GATEWAY_PREFIX=${GATEWAY_CIDR_BLOCK#*/}\n",
                "GATEWAY=`echo ${GATEWAY_NET} | awk -F. '{ print $1\".\"$2\".\"$3\".\"$4+1 }'`\n",
                "VPC_CIDR_BLOCK=`curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/${GATEWAY_MAC}/vpc-ipv4-cidr-block`\n",
                "VPC_NET=${VPC_CIDR_BLOCK%/*}\n",
                "VPC_PREFIX=${VPC_CIDR_BLOCK#*/}\n",
                "NAME_SERVER=`echo ${VPC_NET} | awk -F. '{ print $1\".\"$2\".\"$3\".\"$4+2 }'`\n",
                "POOLMEM='", { "Fn::GetAtt": [ "Webserver", "PrivateIp" ] }, "'\n",
                "POOLMEMPORT=80\n",
                "APPNAME='demo-app-1'\n",
                "VIRTUALSERVERPORT=80\n",
                "CRT='default.crt'\n",
                "KEY='default.key'\n"
              ] ]
            },
            "group": "root",
            "mode": "000755",
            "owner": "root"
          },
          "/tmp/firstrun.utils": {
            "group": "root",
            "mode": "000755",
            "owner": "root",
            "source": "http://cdn.f5.com/product/templates/utils/firstrun.utils"
          },
          "/tmp/firstrun.sh": {
            "content": {
              "Fn::Join": [ "", [
                "#!/bin/bash\n",
                ". /tmp/firstrun.config\n",
                ". /tmp/firstrun.utils\n",
                "FILE=/tmp/firstrun.log\n",
                "if [ ! -e $FILE ]\n",
                " then\n",
                "   touch $FILE\n",
                "   nohup $0 0<&- &>/dev/null &\n",
                "   exit\n",
                "fi\n",
                "exec 1<&-\n",
                "exec 2<&-\n",
                "exec 1<>$FILE\n",
                "exec 2>&1\n",
                "date\n",
                "checkF5Ready\n",
                "echo 'starting tmsh config'\n",
                "tmsh modify sys ntp timezone ${TZ}\n",
                "tmsh modify sys ntp servers add { 0.pool.ntp.org 1.pool.ntp.org }\n",
                "tmsh modify sys dns name-servers add { ${NAME_SERVER} }\n",
                "tmsh modify sys global-settings gui-setup disabled\n",
                "tmsh modify sys global-settings hostname ${HOSTNAME}\n",
                "tmsh modify auth user admin password \"'${BIGIP_ADMIN_PASSWORD}'\"\n",
                "tmsh save /sys config\n",
                "tmsh modify sys httpd ssl-port ${MANAGEMENT_GUI_PORT}\n",
                "tmsh modify net self-allow defaults add { tcp:${MANAGEMENT_GUI_PORT} }\n",
                "if [[ \"${MANAGEMENT_GUI_PORT}\" != \"443\" ]]; then tmsh modify net self-allow defaults delete { tcp:443 }; fi\n",
                "tmsh mv cm device bigip1 ${HOSTNAME}\n",
                "tmsh save /sys config\n",
                "checkStatusnoret\n",
                "sleep 20\n",
                "tmsh save /sys config\n",
                "tmsh create ltm pool ${APPNAME}-pool members add { ${POOLMEM}:${POOLMEMPORT} } monitor http\n",
                "tmsh create ltm policy uri-routing-policy controls add { forwarding } requires add { http } strategy first-match legacy\n",
                "tmsh modify ltm policy uri-routing-policy rules add { service1.example.com { conditions add { 0 { http-uri host values { service1.example.com } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 1 } }\n",
                "tmsh modify ltm policy uri-routing-policy rules add { service2.example.com { conditions add { 0 { http-uri host values { service2.example.com } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 2 } }\n",
                "tmsh modify ltm policy uri-routing-policy rules add { apiv2 { conditions add { 0 { http-uri path starts-with values { /apiv2 } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 3 } }\n",
                "tmsh create ltm virtual /Common/${APPNAME}-${VIRTUALSERVERPORT} { destination 0.0.0.0:${VIRTUALSERVERPORT} mask any ip-protocol tcp pool /Common/${APPNAME}-pool policies replace-all-with { uri-routing-policy { } } profiles replace-all-with { tcp { } http { } } source 0.0.0.0/0 source-address-translation { type automap } translate-address enabled translate-port enabled }\n",
                "tmsh save /sys config\n",
                "date\n",
                "# typically want to remove firstrun.config after first boot\n",
                "# rm /tmp/firstrun.config\n"
              ] ]
            },
            "group": "root",
            "mode": "000755",
            "owner": "root"
          }
        },
        "commands": {
          "b-configure-Bigip": {
            "command": "/tmp/firstrun.sh\n"
          }
        }
      }
    }
  },
  "Properties": {
    "ImageId": { "Fn::FindInMap": [ "BigipRegionMap", { "Ref": "AWS::Region" }, { "Ref": "BigipPerformanceType" } ] },
    "InstanceType": { "Ref": "BigipInstanceType" },
    "KeyName": { "Ref": "KeyName" },
    "NetworkInterfaces": [
      {
        "Description": "Public or External Interface",
        "DeviceIndex": "0",
        "NetworkInterfaceId": { "Ref": "Bigip1ExternalInterface" }
      }
    ],
    "Tags": [
      { "Key": "Application", "Value": { "Ref": "AWS::StackName" } },
      { "Key": "Name", "Value": { "Fn::Join": [ "", [ "BIG-IP: ", { "Ref": "AWS::StackName" } ] ] } }
    ],
    "UserData": {
      "Fn::Base64": {
        "Fn::Join": [ "", [
          "#!/bin/bash\n",
          "/opt/aws/apitools/cfn-init-1.4-0.amzn1/bin/cfn-init -v -s ", { "Ref": "AWS::StackId" },
          " -r ", "Bigip1Instance",
          " --region ", { "Ref": "AWS::Region" }, "\n"
        ] ]
      }
    }
  },
  "Type": "AWS::EC2::Instance"
}

The above may look like a lot at first, but at a high level, we start by creating some files "inline" as well as "sourcing" some files from a remote location.
/tmp/firstrun.config - Here we create a file inline, laying down variables from the CloudFormation stack deployment itself and even the metadata service (http://169.254.169.254/latest/meta-data/). Take a look at the "Ref" stanzas. When this file is laid down on the BIG-IP disk itself, those variables will be interpolated and contain the actual contents. The idea here is to keep config and execution separate.

/tmp/firstrun.utils - These are just some helper functions to assist with initial provisioning. We use those to determine when the BIG-IP is ready for a particular configuration step (e.g. after licensing or provisioning). Note that instead of creating the file inline like the config file above, we simply "source" or download the file from a remote location.

/tmp/firstrun.sh - This file is created inline as well, and it is where it all comes together. The first thing we do is load the config variables from firstrun.config and the helper functions from firstrun.utils. We then create a separate log file (/tmp/firstrun.log) to capture the output of this particular script; capturing the output of these various commands just helps with debugging runs. Then we run a function called checkF5Ready (loaded from that helper function file) to make sure BIG-IP's database is up and ready to accept a configuration. The rest may look more familiar, and it is where most of the user customization takes place. We use variables from the config file to configure the BIG-IP using familiar methods like TMSH and iControl REST. Technically, you could lay down an entire config file (such as an SCF) and load it instead; we use tmsh here for simplicity (see the sketch at the end of this section). The possibilities are endless though.

Disclaimer: the specific implementation above will certainly be optimized and evolve, but the most important takeaway is that we can now leverage cloud-init and AWS's helper libraries to bootstrap the BIG-IP into a working configuration from the very first boot!
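As an aside, if you did want to go the SCF route mentioned above rather than issuing individual tmsh commands, a minimal sketch might look like the following. The file names and paths are illustrative only, and the example assumes you have already copied a previously saved SCF onto the instance (for example, by adding it to the files section above or baking it into a custom image):

# On a reference BIG-IP, capture the full running configuration as an SCF
# (written under /var/local/scf/ by default)
tmsh save sys config file baseline.scf

# On the newly launched instance, load that SCF in place of the individual
# tmsh commands in firstrun.sh, then save
tmsh load sys config file /var/local/scf/baseline.scf
tmsh save sys config

# Or merge a partial configuration instead of replacing everything
tmsh load sys config merge file /var/tmp/partial-config.txt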
Debugging Cloud-init

What if something goes wrong? Where do you look for more information? The first place you might look is the various cloud-init logs in /var/log (cloud-init.log, cfn-init.log, cfn-wire.log). Below is an example from the CFTs referenced above:

[admin@ip-10-0-0-205:NO LICENSE:Standalone] log # tail -150 cfn-init.log
2016-01-11 10:47:59,353 [DEBUG] CloudFormation client initialized with endpoint https://cloudformation.us-east-1.amazonaws.com
2016-01-11 10:47:59,353 [DEBUG] Describing resource BigipEc2Instance in stack arn:aws:cloudformation:us-east-1:452013943082:stack/as-testing-byol-bigip/07c962d0-b893-11e5-9174-500c217b4a62
2016-01-11 10:47:59,782 [DEBUG] Not setting a reboot trigger as scheduling support is not available
2016-01-11 10:47:59,790 [INFO] Running configSets: default
2016-01-11 10:47:59,791 [INFO] Running configSet default
2016-01-11 10:47:59,791 [INFO] Running config config
2016-01-11 10:47:59,792 [DEBUG] No packages specified
2016-01-11 10:47:59,792 [DEBUG] No groups specified
2016-01-11 10:47:59,792 [DEBUG] No users specified
2016-01-11 10:47:59,792 [DEBUG] Writing content to /tmp/firstrun.config
2016-01-11 10:47:59,792 [DEBUG] No mode specified for /tmp/firstrun.config
2016-01-11 10:47:59,793 [DEBUG] Writing content to /tmp/firstrun.sh
2016-01-11 10:47:59,793 [DEBUG] Setting mode for /tmp/firstrun.sh to 000755
2016-01-11 10:47:59,793 [DEBUG] Setting owner 0 and group 0 for /tmp/firstrun.sh
2016-01-11 10:47:59,793 [DEBUG] Running command b-configure-BigIP
2016-01-11 10:47:59,793 [DEBUG] No test for command b-configure-BigIP
2016-01-11 10:47:59,840 [INFO] Command b-configure-BigIP succeeded
2016-01-11 10:47:59,841 [DEBUG] Command b-configure-BigIP output: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 40 0 40 0 0 74211 0 --:--:-- --:--:-- --:--:-- 40000
2016-01-11 10:47:59,841 [DEBUG] No services specified
2016-01-11 10:47:59,844 [INFO] ConfigSets completed
2016-01-11 10:47:59,851 [DEBUG] Not clearing reboot trigger as scheduling support is not available
[admin@ip-10-0-0-205:NO LICENSE:Standalone] log #

If you are trying out the example templates above, you can also inspect the various files mentioned. In addition to checking for their general presence:

/tmp/firstrun.config = make sure variables were passed as you expected.
/tmp/firstrun.utils = make sure the file exists and was downloaded.
/tmp/firstrun.log = see if any obvious errors were output.

It may also be worth checking the AWS CloudFormation console to make sure you passed the parameters you were expecting.

Single-NIC

Another one of the important building blocks introduced with 12.0 on the AWS and Azure Virtual Editions is the ability to run BIG-IP with just a single network interface. Typically, BIG-IPs were deployed in a multi-interface model, where interfaces were attached to an out-of-band management network and one or more traffic (or "data-plane") networks. But, as we know, cloud architectures scale by requiring simplicity, especially at the network level. To this day, some clouds can only support instances with a single IP on a single NIC. In AWS's case, although they do support multiple NICs/multiple IPs, some of their services, like ELB, only point to the first IP address of the first NIC. So this Single-NIC configuration makes BIG-IP not only possible but also dramatically easier to deploy in these various architectures.

How this works: we can now attach just one interface to the instance, and BIG-IP will start up, recognize this, and use DHCP to configure the necessary settings on that interface.
Underneath the hood, the following DB keys will be set:

admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list sys db provision.1nic one-line
sys db provision.1nic { value "enable" }
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list sys db provision.1nicautoconfig one-line
sys db provision.1nicautoconfig { value "enable" }

provision.1nic = allows both management and data-plane traffic to use the same interface
provision.1nicautoconfig = uses the address from DHCP to configure a VLAN, Self IP and default gateway

Example network objects configured automatically:

admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net vlan
net vlan internal {
    if-index 112
    interfaces {
        1.0 { }
    }
    tag 4094
}
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net self
net self self_1nic {
    address 10.0.1.65/24
    allow-service {
        default
    }
    traffic-group traffic-group-local-only
    vlan internal
}
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net route
net route default {
    gw 10.0.1.1
    network default
}

Note: Traffic Management Shell (TMSH) and the Configuration Utility (GUI) are still available on ports 22 and 443 respectively. If you want to run the management GUI on a higher port (for instance, if you don't have the BIG-IPs behind a Port Address Translation service like ELB and want to run an HTTPS virtual on 443), use the following commands:

tmsh modify sys httpd ssl-port 8443
tmsh modify net self-allow defaults add { tcp:8443 }
tmsh modify net self-allow defaults delete { tcp:443 }

WARNING: Now that management and data plane run on the same interface, make sure to modify your Security Groups to restrict access to SSH and whatever port you use for the management GUI to trusted networks.

UPDATE: On Single-NIC, Device Service Clustering currently only supports configuration syncing (network failover is restricted for now due to BZ-606032).

In general, the single-NIC model lends itself better to single-tenant or per-app deployments, where you need the advanced services from BIG-IP like content routing policies, iRules scripting, WAF, etc. but don't necessarily care to maintain a management subnet in the deployment and are just optimizing or securing a single application. By single tenant we also mean single management domain, as you're typically running everything through a single wildcard virtual (e.g. 0.0.0.0:0, 0.0.0.0:80, 0.0.0.0:443, etc.) vs. giving each tenant its own virtual server (usually with its own IP and configuration) to manage. However, you can still technically run multiple applications behind this virtual with a policy or iRule, where you make traffic-steering decisions based on L4-L7 content (SNI, hostname headers, URIs, etc.). In addition, if the BIG-IPs are sitting behind a Port Address Translation service, it is also possible to stack virtual services on ports instead, for example:

0.0.0.0:444 = Virtual Service 1
0.0.0.0:445 = Virtual Service 2
0.0.0.0:446 = Virtual Service 3

We'll let you get creative here.

BIG-IP Auto Scale

Finally, the last component of Auto Scaling BIG-IPs involves building scaling policies via CloudWatch alarms. In addition to the built-in EC2 metrics in CloudWatch, BIG-IP can report its own set of metrics, shown below:

Figure 3: CloudWatch metrics to scale BIG-IPs based on traffic load
This can be configured with the following TMSH commands on any version 12.0 or later build:

tmsh modify sys autoscale-group autoscale-group-id ${BIGIP_ASG_NAME}
tmsh load sys config merge file /usr/share/aws/metrics/aws-cloudwatch-icall-metrics-config

These commands tell BIG-IP to push the above metrics to a custom "Namespace" on which we can roll up data via standard aggregation functions (min, max, average, sum). This namespace (based on the Auto Scale group name) will appear as a row under the "Custom Metrics" dropdown in the left sidebar in CloudWatch (left side of Figure 3). Once these BIG-IP or EC2 metrics have been populated, CloudWatch alarms can be built, and these alarms are available for use in Auto Scaling policies for the BIG-IP Auto Scale group. (Amazon provides some nice instructions here.)

Auto Scaling Pool Members and Service Discovery

If you are scaling your ADC tier, you are likely going to be scaling your application as well. There are two options for discovering pool members in your application's Auto Scale group:

1) FQDN Nodes. For example, in a typical sandwich deployment, your application members might also be sitting behind an internal ELB, so you would simply point your FQDN node at the ELB's DNS name (see the sketch at the end of this section). For more information, please see: https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm-implementations-12-1-0/25.html?sr=56133323

2) BIG-IP's AWS Auto Scale Pool Member Discovery feature (introduced in v12.0). This feature polls the Auto Scale group via API calls and populates the pool based on its membership. For more information, please see: https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-ve-autoscaling-amazon-ec2-12-1-0/2.html

Putting it all together

The high-level steps for Auto Scaling BIG-IP include the following:

Optionally* creating an Elastic Load Balancer which will direct traffic to BIG-IPs in your Auto Scale group once they become operational. Otherwise, you will need Global Server Load Balancing (GSLB).
Creating a launch configuration in EC2 (referencing either a custom image ID and/or using cloud-init scripts as described above).
Creating an Auto Scale group using this launch configuration.
Creating CloudWatch alarms using the EC2 or custom metrics reported by BIG-IP.
Creating scaling policies for your BIG-IP Auto Scale group using the alarms above. You will want to create both scale-up and scale-down policies.

Here are some things to keep in mind when Auto Scaling BIG-IP:

● BIG-IP must run in a single-interface configuration with a wildcard listener (as we talked about earlier). This is required because we don't know what IP address the BIG-IP will get.
● Auto Scale groups themselves consist of utility instances of BIG-IP.
● The scale-up time for BIG-IP is about 12-20 minutes depending on what is configured or provisioned. While this may seem like a long time, creating the right scaling policies (polling intervals, thresholds, unit of scale) makes this a non-issue.
● This deployment model lends itself toward the themes of stateless, horizontal scalability and immutability embraced by the cloud. Currently, the config on each device is updated once and only once at device startup. The only way to change the config is through the creation of a new image or a modification to the launch configuration. Stay tuned for a clustered deployment which is managed in a more traditional operational approach.
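To make the FQDN node option above a little more concrete, here is a minimal tmsh sketch. The ELB DNS name and object names are placeholders, and it assumes DNS resolution is already configured on the BIG-IP (as the firstrun script earlier in this article does):

# FQDN node pointed at the internal ELB fronting the application's Auto Scale group;
# autopopulate lets the node expand to every address the name resolves to
tmsh create ltm node app-asg-elb fqdn { name internal-app-elb-123456789.us-east-1.elb.amazonaws.com autopopulate enabled }

# Reference that node in the pool; members track the DNS answers as the app scales
tmsh create ltm pool app-asg-pool members add { app-asg-elb:80 } monitor http
tmsh save sys config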
If you are interested in exploring Auto Scale further, we have put together some examples of scaling the BIG-IP tier here: https://github.com/f5devcentral/f5-aws-autoscale

* The above Github repository also provides some examples of incorporating BYOL instances in the deployment, so you can leverage BYOL instances for your static load and use Auto Scale instances for your dynamic load. See the READMEs for more information.

CloudFormation templates with Auto Scaled BIG-IP and Application

Need some ideas on how you might leverage these solutions? Now that you can completely deploy a solution with a single template, how about building service catalog items for your business units that deploy different BIG-IP services (LB, WAF), or a managed service built on top of AWS that can be easily deployed each time you onboard a new customer?

OSPF on F5 "Can't setsockopt IP_MULTICAST_IF"
Hi all, I'm trying to set up an OSPF relationship between a Juniper SRX and an F5 (VM). The SRX is configured as follows (some parts are cut out, like policies; everything is permitted):

routing-options {
    router-id 192.168.203.1;
    protocols {
        ospf {
            area 0.0.0.0 {
                interface ge-0/0/15.1203 {
                    neighbor 192.168.203.203;
}}}}
security-zone TEST {
    address-book {
        address NET_192.168.203.0 192.168.203.0/24;
    }
    host-inbound-traffic {
        system-services {
            ntp; dns; ping; all;
        }
        protocols {
            ospf; all;
            interfaces {
                ge-0/0/15.1203 {
                    host-inbound-traffic {
                        system-services {
                            bootp; ping; dns; ntp;
                        }
                        protocols {
                            ospf;
                        }
}}}}

On the F5 I've created the following:

Create partition PD_1.
Create route domain RD_1. This is also the default for PD_1, and the path for this route domain is PD_1.
Create VLAN1203; its partition is PD_1, the tag is 1203, and interface 1.1 is untagged.
On RD_1, add vlan1203 with OSPFv2 enabled.
Create a Self IP. The IP is in the /24, on VLAN1203, partition PD_1, port lockdown allow all, non-floating.

The config for RD_1 is the following:

baba.nl[1]sh run
!
no service password-encryption
!
log file /var/log/zebos.log
!
interface lo
!
interface /PD_1/VLAN1203
 ip ospf network point-to-multipoint   (also tried NBMA, broadcast, p2p)
 ip ospf hello-interval 3
 ip ospf dead-interval 3
 ip ospf priority 0
!
router ospf 199
 ospf router-id 192.168.41.103
 redistribute kernel
 network 192.168.202.0 0.0.0.255 area 0.0.0.0
 network 192.168.203.0 0.0.0.255 area 0.0.0.0
!
line con 0
 login
line vty 0 39
 login
!
end

Whatever I try, I always get the following errors:

2016/07/12 02:00:20 informational: OSPF Instance Id [199]: LSA[Refresh]: timer expired
2016/07/12 02:00:22 informational: OSPF Instance Id [199]: IFSM[/PD_1/VLAN1203:192.168.203.203]: Hello timer expire
2016/07/12 02:00:22 warnings: OSPF Instance Id [199]: OS[/PD_1/VLAN1203:192.168.203.203]: Can't setsockopt IP_MULTICAST_IF: Cannot assign requested address

How can I get OSPF to work?

root@(baba)(cfg-sync Standalone)(Active)(/Common)(tmos) show sys version
Sys::Version
Main Package
  Product  BIG-IP
  Version  12.0.0
  Build    1.0.628
  Edition  Hotfix HF1
  Date     Mon Jan 11 09:43:58 PST 2016

Successfully Deploy Your Application in the AWS Public Cloud: Part 1 of 4
In this series of articles, we're going to walk you through a fairly typical lift-and-shift deployment of BIG-IP in AWS, so that:

If you're just starting, you can get an idea of what lies ahead.
If you're already working in the cloud, you can get familiar with a variety of F5 solutions that will help your application and organization be successful.

The scenario we've chosen is pretty typical: we have an application in our data center and we want to move it to the public cloud. As part of this move, we want to give development teams access to an agile environment, and we want to ensure that NetOps/SecOps maintains the stability and control they expect. Here is a simple diagram for a starting point.

We're a business that sells our products on the web. Specifically, we sell a bunch of random picnic supplies and candies. Our hot seller this summer is hotdog-flavored lemonade, something you might think is appalling but that really encompasses everything great about a picnic. But back to the scenario: we have a data center with two physical BIG-IPs that function as a web application firewall (WAF) and load balance traffic securely to three application servers. These application servers get their product information from a product database. Our warehouse uses a separate internal application to manage inventory, and that inventory is stored in an inventory database. In this series of articles, we'll show you how to move the application to Amazon Web Services (AWS), and discuss the trade-offs that come at different stages in the process. So let's get started.

The challenge: Move to the cloud; keep environments in sync.
The solution: Use a CloudFormation Template (CFT) to create a repeatable cloud deployment.

We've been told to move to the cloud, and after a thorough investigation of the options, have decided to move our picnic-supply-selling app to Amazon Web Services. Our organization maintains several different environments:

Dev (one environment per developer)
Test
UAT
Performance
Production

These environments tend to be out of sync with one another. This frustrates everyone. And when we deploy the app to production, we often see unexpected results. If possible, we don't want to bring this problem along to the cloud. We want to deploy our application to all of these environments and have the result be the same every time. Even if each developer has different code, all developers should be working in an infrastructure environment that matches all other environments, most importantly, production.

Enter the AWS CloudFormation template. We can create a template and use it to consistently spin up the same environment. If we require a change, we can make the modification and save a new version of the CFT, letting everyone on the team know about the change. And it's version-controlled, so we can always roll back if we mess up. So we use a CFT to create our application servers and deploy the latest code on them. In our scenario, we create an AWS Elastic Load Balancer so we can continue load balancing to the application servers. Our product data has a dependency on inventory data that comes from the warehouse, and we use BIG-IP for authentication (among other things). We use our on-premises BIG-IPs to create an IPsec VPN tunnel to AWS so that our application can maintain a connection to the inventory system. When we get the CFT working the way we want, we can swing the DNS to point to these new AWS instances.
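Because the template is the single source of truth, each environment can be stood up with the same couple of commands. Here is a rough sketch using the AWS CLI; the stack name and parameter values are hypothetical (the real parameter keys are whatever the template's Parameters section defines):

# Spin up an identical stack for one developer's environment
aws cloudformation create-stack \
  --stack-name picnic-dev-alice \
  --template-body file://picnic-app.template \
  --parameters ParameterKey=WindowsAMI,ParameterValue=ami-0123456789abcdef0 \
               ParameterKey=WindowsName1,ParameterValue=dev-alice-web1 \
  --region us-east-1

# Watch progress and pull the ELB URL from the stack outputs when it completes
aws cloudformation describe-stacks --stack-name picnic-dev-alice --query 'Stacks[0].Outputs'

Repeat with a different --stack-name for Test, UAT, Performance, and Production, and every environment is built from exactly the same definition.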
Details about the CloudFormation template

We put a CFT on Github that you can deploy to demonstrate the AWS part of this setup. It may help you visualize this deployment, and in part 2 of this series, we'll be expanding on this initial setup. If you'd like, you can deploy it by clicking the following button. Ensure that when you're in the AWS console, you select the region where you want to deploy. And if you're really new to this, just remember that active instances cost money.

The CFT creates four Windows servers behind an AWS Elastic Load Balancer (ELB). Three of the servers run a web app and one is used for the database. Beware, the website is a bit goofy; we were feeling punchy when we created it. Here is a brief explanation of what specific sections of the CFT do.

Parameters

The Parameters section includes fields you must populate when deploying the CFT. In this case, you'll have to specify a name for your servers and the AMI (Amazon Machine Image) ID to build the servers from. In the template, you can see what parameters look like. For example, the field where you enter the AMI ID:

"WindowsAMI": {
  "Description": "Windows Version and Region AMI",
  "Type": "String"
}

To find the ID of the AMI you want to use, look in the Marketplace, find the product you want, click the Manual Launch tab, and note the AMI ID for the region where you're going to deploy. We are using Microsoft Windows Server 2016 Base and Microsoft Windows Server 2016 with MSSQL 2016. Note: These IDs can change; check the AWS Marketplace for the latest AMI IDs.

Resources

The Resources section of the CFT performs the legwork. The CFT creates a Virtual Private Cloud (VPC) with three subnets so that the application is redundant across availability zones. It creates a Windows Server instance in each availability zone, and it creates an AWS Elastic Load Balancer (ELB) in front of the application servers. Code that creates the load balancer:

"StackELB01": {
  "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
  "Properties": {
    "Subnets": [ { "Ref": "StackSubnet1" }, { "Ref": "StackSubnet2" }, { "Ref": "StackSubnet3" } ],
    "Instances": [ { "Ref": "WindowsInstance1" }, { "Ref": "WindowsInstance2" }, { "Ref": "WindowsInstance3" } ],
    "Listeners": [ { "LoadBalancerPort": "80", "InstancePort": "80", "Protocol": "HTTP" } ],
    "HealthCheck": {
      "Target": "HTTP:80/",
      "HealthyThreshold": "3",
      "UnhealthyThreshold": "5",
      "Interval": "30",
      "Timeout": "5"
    },
    "SecurityGroups": [ { "Ref": "ELBSecurityGroup" } ]
  }
}

Then the CFT uses cloud-init to configure the Windows machines. It installs IIS on each machine, sets the hostname, and creates an index.html file that contains the server name (so that when you load balance to each machine, you will be able to determine which app server is serving the traffic). It also adds your user to the machine's local Administrators group. Note: This is just part of the code; look at the CFT itself for details.

"install_IIS": {
  "files": {
    "C:\\Users\\Administrator\\Downloads\\firstrun.ps1": {
      "content": {
        "Fn::Join": [ "", [
          "param ( \n",
          " [string]$password,\n",
          " [string]$username,\n",
          " [string]$servername\n",
          ")\n",
          "\n",
          "Add-Type -AssemblyName System.IO.Compression.FileSystem\n",
          "\n",
          "## Create user and add to Administrators group\n",
          "$pass = ConvertTo-SecureString $password -AsPlainText -Force\n",
          "New-LocalUser -Name $username -Password $pass -PasswordNeverExpires\n",
          "Add-LocalGroupMember -Group \"Administrators\" -Member $username\n",

The CFT then calls PowerShell to run the script:

"commands": {
  "b-configure": {
    "command": {
      "Fn::Join": [ " ", [
        "powershell.exe -ExecutionPolicy unrestricted C:\\Users\\Administrator\\Downloads\\firstrun.ps1",
        { "Ref": "adminPassword" },
        { "Ref": "adminUsername" },
        { "Ref": "WindowsName1" },
        "\n"
      ] ]
    }
  }
}

Finally, this section includes signaling. You can use the cloud-init cfn-signal helper script to pause the stack until resource creation is complete. For more information, see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-signal.html. Sample of signaling:

"WindowsInstance1WaitHandle": {
  "Type": "AWS::CloudFormation::WaitConditionHandle"
},
"WindowsInstance1WaitCondition": {
  "Type": "AWS::CloudFormation::WaitCondition",
  "DependsOn": "WindowsInstance1",
  "Properties": {
    "Handle": { "Ref": "WindowsInstance1WaitHandle" },
    "Timeout": "1200"
  }
}

Outputs

The output includes the URL of the AWS ELB, which you use to connect to your applications.

"Outputs": {
  "ServerURL": {
    "Description": "The AWS Generated URL.",
    "Value": {
      "Fn::Join": [ "", [
        "http://",
        { "Fn::GetAtt": [ "StackELB01", "DNSName" ] }
      ] ]
    }
  }
}

This output is displayed in the AWS console, on the Outputs tab. You can use the link to quickly connect to the ELB. When we're done deploying the app and we've fully tested it in AWS, we can swing the DNS from the internal address to the AWS load balancer, and we're up and running. Come back next week as we implement BIG-IP VE for security in AWS.

What am I doing wrong with this network configuration for KVM (F5 virtual edition)
Hello, I have been trying to set up an F5 lab using KVM on Debian. I currently have the following network configuration (/etc/network/interfaces - see output pasted at end of post), although whenever I go to create the VM in KVM, only one tap is associated with the bridge (I am trying to use the taps for the management, external, and internal interfaces on the F5 VM). (See screenshot at end of post.) Would anyone have any suggestions for configuring networking properly for this setup? I'm open to anything at this point in time. Thanks for your help.

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp
iface eth0 inet6 auto

auto br0
iface br0 inet dhcp
    pre-up ip tuntap add dev tap0 mode tap user root
    pre-up ip tuntap add dev tap1 mode tap user root
    pre-up ip tuntap add dev tap2 mode tap user root
    pre-up ip link set tap0 up
    pre-up ip link set tap1 up
    pre-up ip link set tap2 up
    bridge_ports all tap0 tap1 tap2
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0
    post-down ip link set tap0 down
    post-down ip link set tap1 down
    post-down ip link set tap2 down
    post-down ip tuntap del dev tap0 mode tap
    post-down ip tuntap del dev tap1 mode tap
    post-down ip tuntap del dev tap2 mode tap
I'm in the process of upgrading our physical Big-IP LTMs and would like to import as much of the configuration as possible (while maintaining VE management configuration) into a virtual edition lab to perform a mock upgrade. I exported the SCF from the source physical and the VE for comparison. I found K81271448: Merging BIG-IP configuration objects into the running configuration using tmsh https://support.f5.com/csp/article/K81271448 So it looks like I could remove portions from physical source configuration file and massage the rest, and merge. I converted the vlans to use the last interface on the VE (and disconnected from the VM). But which parts of the config should I keep, and which should I remove prior to merging? I also read that a UCS configuration might be more appropriate to export and import. What is the best recommendation to migrate production Big-IP configuration to a VE lab to test an upgrade prior to actual upgrade?421Views0likes2CommentsVE lost communication after upgrade to ESXi 6.7 Update 2
After upgrade ESXi 6.7 from build 13004448 (last public patch before Update 2) to 13006603 (Update 2), VE 14.1.0.3 lost communication on TMM interfaces (mgmt worked fine). I did some research then. Egress traffic from TMM went correctly (another VM could see it), but ingress traffic did not fall into TMM. Tried to send burst of 10k broadcast ARP requests in VLAN with TMM interfaces connected, but with no adequate change in stats (tmsh show net interface all-properties raw) on TMM interfaces (stats of mgmt interface, connected to the same VLAN, are increased adequately). Restarting VE did not help. Restarting ESXi did not help. After rolling the ESXi back to the previous build, everything started to work. Tried once again upgrade to 6.7u2 and rollback, with the same results. Speculation: Since Update 2 release notes mention bugfix in vmxnet3, this may be related. AFAIK, TMM takes care of interfaces by itself (in linux, these are visible in lspci, but neither in ifconfig, ip link nor /sys/class/net/), so there could have been made some change in host-side vmxnet3 that is not yet reflected in TMM code. Has anyone similar experience with ESXi 6.7u2? Best regards, Ondrej627Views0likes5CommentsMultiple DNS Views?
Hi, in our environment we have bind (tcp/udp port 53) behind an L4 virtual server on big-ip ltm virtual edition. "Views" define where the DNS traffic should be going - is there an easy and effective way to maintain source information while routing to the backend servers in our pool? Are we able to have multiple DNS views to accommodate for the different client request sources? OR Is there a better way to handle this, such as with irules, snat pools, etc? Thanks!306Views0likes1Commentimportance of having dedicated vCPU for BIG-IP VE installation ?
Can you help me check the importance of having dedicated vCPU for BIG-IP VE installation ? For OVA installation, we have gone with VPC standard; (ie. no CPU reservation; no Thick provision) Can you advise if we have concerns on: a.No CPU reservation b.No Thick provisioning ** Note: The VPC team monitors the whole Hypervisor Frame and ensure util of CPU never exceed 40%; and guarantee disk space availability even VM is provision is Thin.189Views1like0CommentsBIG-IP VE - VMWare ESXi recommended settings
How well does Big-IP F5 work on VMWare Dynamic resource scheduler ? we have successfully migrated a standalone physical to Virtual but soon after we had to revert as their were reach-ability issues with the virtual servers ? any guess on what could have caused the issue ? currently the security settings on VMware for Promiscuous mode, mac address change and forged transits are set to reject. -- I suspect the issue might have been because of this setting.. but our device is only standalone. changing the vm settings is not easy without justification.435Views0likes1Comment