big-ip virtual edition
F5 in AWS Part 1 - AWS Networking Basics
Updated for Current Versions and Documentation

Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

If you work in IT, and you haven't been living under a rock, then you have likely heard of Amazon Web Services (AWS). There has been a substantial increase in the maturity and stability of the AWS Elastic Compute Cloud (EC2), but you may be wondering: can I continue to leverage F5 services in AWS? In this series of blog posts, we will discuss the how and why of running F5 BIG-IP in EC2. In this specific article, we'll start with the basics of AWS EC2 and the Virtual Private Cloud (VPC). Later in the series, we will discuss some of the considerations associated with running BIG-IP as a compute instance in this environment, outline the best deployment models for your application in EC2, and show how these deployment models can be automated using open-source tools.

Note: AWS uses the terms "public" and "private" to refer to what F5 Networks has typically referred to as "external" and "internal" respectively. We will use these terms interchangeably.

First, what is AWS?

If you have read the story of EC2's origins, you will know that the project began with an internal interest at Amazon to move away from messy, multi-tenant networks using VLANs for segregation. Instead, network engineers at Amazon wanted to build an entirely IP-based architecture. This vision morphed into the universe of application services available today. Of course, building multi-tenant, purely L3 networks at massive scale had implications for both security and redundancy (we'll get to this later). Today, EC2 enables users to run applications and services on top of virtualized network, storage, and compute infrastructure, where hosts are deployed in the form of Amazon Machine Images (AMIs). These AMIs can either be private to the user or launched from the public AWS Marketplace. Hosts can be added to Elastic Load Balancing (ELB) groups and associated with publicly accessible IPs to implement a simple horizontal model for availability.

AWS became truly relevant for the enterprise with the introduction of the Virtual Private Cloud service. VPCs enable users to build virtual private networks at the IP layer. These private networks can be connected to on-premise environments by way of a VPN Gateway, or connected to the internet via an Internet Gateway. When deploying hosts within a VPC, the user has a significant amount of control over how each host is attached to the network. For example, a host can be attached to multiple networks and given several public or private IPs on one or multiple interfaces. Further, users can control many of the security aspects they are used to configuring in an on-premise environment (albeit in a slightly different way), including network ACLs, routing, simple firewalling, DHCP options, etc.

Let's talk about these and other important EC2 aspects and try to understand how they affect our application deployment strategy.

L2 Restrictions

As we mentioned above, one of the design goals of AWS was to remove layer 2 networking. This is a worthy accomplishment, but it means we lose access to certain useful protocols, including ARP (and gratuitous ARP), broadcast and multicast groups, and 802.1Q tagging.
We can no longer use VLANs for some availability models, for quality of service management, or for tenant isolation.

Network Interfaces

For larger topologies, one of the largest impacts of the removal of 802.1Q support is the number of subnets we can attach to a node in the network. Because each interface in AWS is attached as a layer 3 endpoint, we must add an interface for each subnet. This contrasts with traditional networks, where you can add VLANs to your trunk for each subnet via tagging. Even though we're in a virtual world, the number of virtual network interfaces (Elastic Network Interfaces, or ENIs, in AWS terminology) is also limited according to the EC2 instance size. Together, the limit on the number of interfaces and the one-to-one mapping between interface and subnet effectively limit the number of directly connected networks we can attach to a device (like BIG-IP, for example).

IP Addressing

AWS offers two kinds of globally routable IP addresses: Public IP Addresses and Elastic IP Addresses. In the table below, we outline some of the differences between these two types, and you can probably figure out for yourself why we will want to use Elastic IPs with BIG-IP. Like interfaces, AWS limits the number of IPs in several ways, including the number of IPs that can be attached to an interface and the number of Elastic IPs per AWS account.

Table 1: Differences between Public and Elastic IP Addresses

                                                 Public IP   Elastic IP
  Released on device termination/disassociation  YES         NO
  Assignable to secondary interfaces             NO          YES
  Can be associated after launch                 NO          YES

Amazon provides more information on public and elastic IP addresses here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-public-addresses

Each interface on an EC2 instance is given a private IP address. This address is routable locally within your subnet and is assigned from the address range associated with the subnet to which your interface is attached. Multiple secondary private IP addresses can be attached to an interface, which is a useful technique for creating more complex topologies. The limits on the number of interfaces and private IPs per interface within an Amazon VPC are listed here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI

NAT Instances, Subnets and Routing

When creating a VPC using the wizard available in the AWS VPC web portal, several default configurations are possible. One of these configurations is "VPC with Public and Private subnets". In this configuration, what if the instances on our private subnet wish to access the outside world? Because we cannot attach public or elastic IP addresses to instances within the private subnet, we must use the NAT instance provided by AWS. Like BIG-IP and other network devices in EC2, the NAT instance lives as a compute node within your VPC. This is a good way to allow outbound traffic from your internal servers while preventing those servers from receiving inbound traffic. When you create subnets manually or through the VPC wizard, you'll notice that each subnet has an associated route table. These route tables may be updated to control traffic flow between instances and subnets in your VPC.

Regions and Availability Zones

We know quite a few people who have been confused by the concept of availability zones in EC2. To put it clearly, an availability zone is a physically isolated datacenter in a region. Regions may contain multiple availability zones.
Availability zones run on different networking and storage infrastructure, and depend on separate power supplies and internet connections. Striping your application deployments across availability zones is a great way to provide redundancy and perhaps a hot standby, but please note that these are not the same thing: Amazon does not mirror any data between zones on behalf of the customer. While VPCs can span availability zones, subnets may not.

To close this blog post, we are fortunate enough to get a video walkthrough from Vladimir Bojkovic, Solution Architect at F5 Networks. He shows how to create a VPC with internal and external subnets as a practical demonstration of the concepts we discussed above.
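Before moving on, here is a minimal command-line sketch of the subnet and routing concepts above: sending all non-local traffic from a private subnet through a NAT instance. The resource IDs below are hypothetical placeholders; substitute your own.

# Find the route table associated with the private subnet (IDs are placeholders)
aws ec2 describe-route-tables --filters "Name=association.subnet-id,Values=subnet-0private1"

# Point the private subnet's default route at the NAT instance so internal
# servers can reach the internet without being directly reachable from it
aws ec2 create-route --route-table-id rtb-0private1 \
    --destination-cidr-block 0.0.0.0/0 --instance-id i-0nat12345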
F5 in AWS Part 2 - Running BIG-IP in an EC2 Virtual Private Cloud

Updated for Current Versions and Documentation

Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

Previously in this series, we discussed the networking fundamentals of Virtual Private Clouds (VPC) in Amazon's Elastic Compute Cloud (EC2). Some of the topics we touched on include the impact of the removal of layer 2 access, limits on network elements like the number of interfaces and publicly routable IP addresses, and how to manage routing within your subnets. Today we'll cover licensing models and images available in Amazon, sizing requirements (including the number of interfaces assignable to BIG-IP), some basic network topologies, and how you can use Amazon CloudFormation templates to make your life easier when deploying BIG-IP.

Licensing Models

There are two ways you can run BIG-IP in AWS: at a utility rate or Bring Your Own License (BYOL).

Utility Model
● You pay Amazon both for the compute and disk requirements of the instances AND for the BIG-IP software license, at an hourly rate.
● There are two forms: hourly or annual subscriptions. Using annual subscriptions you can save 37%. Follow the instructions on AWS to purchase an annual subscription.
● When launching hourly instances, the devices boot into a licensed state and are immediately ready for provisioning.

BYOL Model
● You pay Amazon only for the compute + disk footprint, not for the F5 software license.
● Version Plus licenses (like "V12" or "V13") can be reused in Amazon if you have them from previous deployments.
● You must license the device after it launches, either manually or via orchestration.
● Available as individual licenses, or in volume as license pools.

All in all, the utility licensing model offers significant flexibility to scale up your infrastructure to meet demand, while reducing the amount you pay for base traffic throughput. It may be advantageous to use this model if you experience large traffic swings. In contrast, you may be able to achieve this flexibility at a lower cost using BYOL license pools. With volume (pool) licensing, licenses can be reused across devices as you ramp these instances up and down.

In addition to choosing between utility and BYOL license models, you'll also need to choose the licensed features and the throughput level. When taking a BYOL approach, the license (which you may already have) will have a maximum throughput level and will be associated with a Good/Better/Best (GBB) package. For more information on GBB, see Simplified Licensing: Compare our Good, Better, Best product bundles.

When deciding on the throughput level, you may license up to 1Gbit/s using hourly AMIs. It is possible to import a 3Gbit/s VE license in AWS, but note that AWS caps the throughput on an instance to 2Gbit/s, so you will be limited by Amazon EC2 restrictions rather than by F5. Driving 2Gbit/s through your virtual instance in AWS will require careful implementation of your configuration on BIG-IP. Also note that the throughput restrictions on each image include both data-plane and management traffic. You can read more about throughput restrictions for virtual instances here: K14810: Overview of BIG-IP VE license and throughput limits.

Once you have chosen a license model, GBB package, and throughput, select the corresponding AMI in the Amazon Marketplace.
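For the BYOL case, licensing after launch can be as simple as a couple of tmsh commands run over SSH (or pushed by your orchestration tool). The registration key below is a placeholder, and the instance needs a path to the F5 licensing servers (or use manual/proxy activation).

# Apply a BYOL registration key after the instance boots (placeholder key shown)
tmsh install sys license registration-key AAAAA-BBBBB-CCCCC-DDDDD-EEEEEEE

# Verify the license and active modules
tmsh show sys license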
Disk and Compute Recommendations

An astute individual will wonder why there exist separate images for each GBB package. In an effort to maintain the smallest footprint possible, each AMI includes just enough disk volume for the licensed features. Each GBB package has different disk requirements, which are built into the AMI. For evidence of this, use the AWS CLI to see details on a specific image:

aws ec2 describe-images --filters "Name=name,Values=*F5 Networks BYOL BIGIP-13.1.0.2.0.0.6*"

Truncated output:

{
    "Images": [
        {
            "ProductCodes": [
                {
                    "ProductCodeId": "91wwm31qya4s3rkc5bv4jq9b3",
                    "ProductCodeType": "marketplace"
                }
            ],
            "Description": "F5 Networks BYOL BIGIP-13.1.0.2.0.0.6 - Better - Jan 16 2018 10_13_53AM",
            "VirtualizationType": "hvm",
            "Hypervisor": "xen",
            "ImageOwnerAlias": "aws-marketplace",
            "EnaSupport": true,
            "SriovNetSupport": "simple",
            "ImageId": "ami-3bbd0243",
            "State": "available",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/xvda",
                    "Ebs": {
                        "Encrypted": false,
                        "DeleteOnTermination": true,
                        "VolumeType": "gp2",
                        "VolumeSize": 82,
                        "SnapshotId": "snap-0c9beaa9345422784"
                    }
                }
            ],
            "Architecture": "x86_64",
            "ImageLocation": "aws-marketplace/F5 Networks BYOL BIGIP-13.1.0.2.0.0.6 - Better - Jan 16 2018 10_13_53AM-98eb3c1e-ab48-41ff-9c94-d71a5d08e49f-ami-0c93b176.4",
            "RootDeviceType": "ebs",
            "OwnerId": "679593333241",
            "RootDeviceName": "/dev/xvda",
            "CreationDate": "2018-01-24T19:58:31.000Z",
            "Public": true,
            "ImageType": "machine",
            "Name": "F5 Networks BYOL BIGIP-13.1.0.2.0.0.6 - Better - Jan 16 2018 10_13_53AM-98eb3c1e-ab48-41ff-9c94-d71a5d08e49f-ami-0c93b176.4"
        },

From the above, you can see that the Good BYOL image configures a single 31 GB Elastic Block Store (EBS) volume, whereas the Best image comes with two EBS volumes, totaling 124 GB in space.

On the subject of storage, we would like to take a moment to focus on analytics. While the analytics module is licensed in the "Good" package, you may need additional disk space in order to provision this module. See this link (https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-ve-setup-msft-hyper-v-11-5-0/3.html) for increasing the disk space on specific volumes. Another option for working around this issue is to use a "Better" AMI, which will ensure you have enough space to provision the analytics module.

In addition to storage, running BIG-IP as a compute node in EC2 also requires a minimum number of interfaces, vCPUs, and RAM. AskF5's Virtual Edition and Supported Hypervisors Matrix provides a list of recommended instance types, although you can choose alternatives as long as they support your architecture's configuration. In short, as you choose higher-performance instance types in EC2, you get more RAM and more network interfaces, which allows you to create more advanced topologies and services.

Basic Network Topologies

So with a limited number of interfaces, how do you build a successful multi-tier application architecture? Many customers might start with a directly connected architecture like that shown below. In this architecture, 10.0.0.100 is the virtual server. The address matches an EC2 private IP on the external interface, either the first assigned to that interface (the primary private IP) or a secondary private IP. Not shown is the Elastic IP address (EIP) which maps to this private IP. We recommend using the primary private IP as the external self-IP on BIG-IP. An Elastic IP can then be attached to this primary private IP to allow outbound calls.
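As a concrete (hedged) example of that last step, allocating an Elastic IP and associating it with a specific private IP on the external ENI looks roughly like this from the AWS CLI; the allocation and interface IDs are placeholders.

# Allocate an EIP for use in the VPC and map it to a private IP on the external ENI
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --allocation-id eipalloc-0abc12345 \
    --network-interface-id eni-0external1 --private-ip-address 10.0.0.100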
The 1:1 NAT performed by Amazon between the public (elastic) IP and the private IP is invisible to BIG-IP. Keep in mind that a publicly routable self-IP is required to use the BIG-IP failover mechanism, which makes HTTP calls to AWS. We'll discuss failover in a few moments. Secondary private IPs and corresponding EIPs on the external interface can then be used for each virtual server.

Given this discussion about interfaces and EIPs, be sure to consider that the instance type you choose in Amazon will dictate how many virtual servers you can run on BIG-IP. For example, given an m3.xlarge (allowed three interfaces) and the default account limit of 5 EIPs, you will be limited to 3 virtual servers. In this case, one interface will be used to attach each of the management, external, and internal subnets. On the external interface, you would attach 3 secondary private IPs, each with an EIP. The other two EIPs would be used for the management port and the external self-IP. To get more interfaces, move up in instance size (for example, from m3.xlarge to m3.2xlarge). To get more EIPs, request them from Amazon. If you do use an EIP for the management port, be sure to ACL it appropriately.

The benefit of the directly connected architecture shown above, where BIG-IP can serve as the default gateway, is that each node in the tier can communicate with other nodes in the tiers and leverage virtual listeners on BIG-IP without having to be SNATed. This is sometimes preferred as it makes it simpler to implement E-W security and analytics. The problem, as shown below, is that as application or tenant density increases, so does the number of required interfaces.

Alternatively, routed architectures (shown below), where pool members live on remote networks, are more easily migrated and are suitable to situations with limited network interfaces. In the case below, the route table for all pool members must contain a default route that leads back to BIG-IP. By doing so, you can:
● leverage BIG-IP for the outbound use case (secure outbound traffic)
● return internet traffic back through the BIG-IP and avoid SNAT'ing your internet-facing VIPs (note: this requires disabling the SRC/DST check on your BIG-IP instances/interfaces)

An alternate, and perhaps more realistic, view of the above looks like the next diagram. Finally, it may make sense to attach additional interfaces for each application to increase the application density on BIG-IP.

These routed architectures allow you to reduce the number of interfaces used to connect internal networks, which then enables you to leverage the remaining interfaces to increase application density. Two potential drawbacks include the requirement of SNAT (as BIG-IP is no longer inline to intercept response traffic) and the addition of an extra network hop. The upstream/downstream router will generally intercept the return traffic because the client is also on a directly connected or closer network.

Elastic IPs = Floating IPs and API-Based Failover

After you have figured out how to incorporate BIG-IP into your network, the final step before deploying applications and network services will be ensuring you can maintain high availability. One of the challenges in adapting BIG-IP for public clouds was that the availability model of BIG-IP (Device Service Clusters, or DSC) was tightly coupled to sharing L2/L3 floating addresses in the same L2 segment. An active device made an L2 broadcast (GARP) to take over "Active" ownership of IP addresses and other network listeners.
In accordance with the removal of L2, BIG-IP has adapted and replaced the GARP failover method with API calls to Amazon. These API calls toggle ownership of Amazon secondary private IP addresses between devices. Any EIPs which map to these secondary IP addresses will then point to the new active device. Note here that floating IPs in BIG-IP speak are now equivalent to secondary private IPs in the EC2 world.

One issue to be aware of with the API-based failover mechanism is the increase in failover time to roughly 10 seconds per EIP. This is the time it takes for changes to propagate in AWS's network. While this downtime is still significantly less than a DNS timeout, it is troublesome as BIG-IP's Device Service Clustering (DSC) feature was specifically introduced to provide sub-second failover. Newer applications built for cloud are typically designed to handle these changes in availability concepts, but this makes it more challenging to shift traditional workloads to layer-3-only environments like AWS.

Historically, the DSC feature has also allowed the use of BIG-IP as a highly available default gateway. This was accomplished by directing the default route to the internal floating self-IP on a cluster or by directly connecting application servers. In Amazon, the default route may point to an internet gateway or a device interface, but not to a statically named IP address. We'll leave the fix for this problem for the next article, where we will also talk about other deployment models of BIG-IP in AWS, including those which span availability zones.

CloudFormation Templates

To close this article, we've decided to provide examples of how BIG-IP can be deployed using CloudFormation Templates (CFTs) in AWS. CloudFormation is an AWS service that enables you to define a set of EC2 resources that can be automatically and deterministically deployed in your account. These application "stacks" are defined in JSON, making them easier to read and share. F5 provides several CFTs with options for licensing model, high availability, and auto scaling (for LTM and WAF modules). Please review the BIG-IP Version Matrix for AWS CFT Templates document within our f5-aws-cloudformation GitHub repository to determine your deployment requirements. Enjoy!
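If you want to try one of those templates from the command line rather than the console, launching a stack looks roughly like the following. The template file name and parameter keys here are placeholders; check the template you choose in the f5-aws-cloudformation repository for its actual parameters.

# Launch a BIG-IP CloudFormation stack from a local copy of a template
aws cloudformation create-stack --stack-name bigip-demo \
    --template-body file://bigip-standalone.template \
    --parameters ParameterKey=KeyName,ParameterValue=my-ssh-key \
    --capabilities CAPABILITY_IAM

# Watch progress
aws cloudformation describe-stacks --stack-name bigip-demo --query "Stacks[0].StackStatus"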
F5 in AWS Part 3 - Advanced Topologies and More on Highly Available Services

Updated for Current Versions and Documentation

Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

Thus far in our article series about running BIG-IP in EC2, we've talked about some VPC/EC2 routing and network concepts, and we walked through the basics of running and licensing BIG-IP in this environment. It's time now to discuss some more advanced topologies that will provide highly redundant and highly available network services for your applications.

As we touched upon briefly in our last article, failover between BIG-IP devices has typically relied upon L2 networking protocols to reach sub-second failover times. We've also hinted over this series of articles at how your applications might need to change as they move to AWS. We recognize that while some applications will see the benefit of a rewrite, and will perhaps place fewer requirements on the network for failover, other applications will continue to require stateful mechanisms from the network in order to be highly available.

Below we will walk through three different topologies with BIG-IP that may make sense for your particular needs. We leave a fourth, auto scaling of BIG-IP (released in version 12.0), for a future article. Each of the topologies we list has drawbacks and benefits, which may make them more or less useful given your tenancy models, SLAs, and orchestration capabilities.

Availability Zones

We've mentioned them before, but when discussing application availability in AWS, it would be negligent to skip over the concept of Availability Zones. At a high level, these are co-located but physically isolated datacenters (separate power, networking, etc.) in which EC2 instances are provisioned. For a more detailed and accurate description, see the official AWS docs: What Is Amazon EC2: Regions and Availability Zones.

Because availability zones are geographically close in proximity, the latency between them is very low (2-3 ms), so they can be treated as one logical data center (latency is low enough for DB-tier communication). AWS recommends deploying services across at least two AZs for high availability. To distribute services across geographical areas, you can of course leverage AWS Regions, with all the caveats that geographically dispersed datacenters present for the application or database tiers.

Let's get down to it and examine our first model for deploying BIG-IP in a highly available fashion in AWS. Our first approach will be very simple: deploy BIG-IP within a single zone in a clustered model. This maps easily to the traditional network environment approach using Device Service Clusters (DSC) we are used to seeing with BIG-IP.

Note: in the following diagrams we have provided detailed IP and subnet annotations. These are provided for clarity and completeness, but are by no means the only way you may set up your network. In many cases, we recommend dynamically assigning IP addresses via automation, rather than fixing IP addresses to specific values (this is what the cloud is all about). We will typically use IP addresses in the range 10.0.0.0/255.255.255.0 for the first subnet, 10.0.1.0/255.255.255.0 for the second subnet, and so on. 100.x.x.x/255.0.0.0 denotes publicly routable IPs (either Elastic IPs or Public IPs in AWS).

Option 1: HA Cluster in a Single AZ

Benefits:
● Traditional HA.
● If a BIG-IP fails, service is preserved.

Tradeoffs:
● No HA across datacenters/AZs. As with a single-DC deployment, if the AZ in which your architecture is deployed goes down, the entire service goes down.

HA Summary:
● Single device failure = heartbeat timeout (approx. 3 sec) + API call (7-12 sec)
● AZ failure = entire deployment

As mentioned, this approach provides the closest analogue to a traditional BIG-IP deployment in a datacenter. Because we don't see the benefits of AWS availability zones in this deployment, this architecture might make the most sense when your AWS deployment acts as a disaster recovery site. A question when examining this architecture might be: "What if we put a cluster in each AZ?"

Option 2: Clusters/HA Pair in Each AZ

Benefits:
● Smallest service impact for either a device failure or an AZ failure.
● Shared DB backend but still provides DC/AZ redundancy.
● Similar to a multiple-DC deployment, generally provides Active/Active capacity.

Tradeoffs:
● Cost: both pairs are located in a single region. Pairs are traditionally reserved for "geo/region" availability.
● Extra dependency and cost of DNS/GSLB.
● Management overhead of maintaining configurations and policies of two separate systems (although this problem might be easily handled via orchestration).

HA Summary:
● Single device failure = heartbeat timeout (approx. 3 sec) + API call (7-12 sec) for 1/2 of traffic
● AZ failure = DNS/GSLB timeout for 1/2 of traffic

The above model provides a very high level of redundancy. For this reason, it seems to make the most sense when incorporated into shared-service or multi-tenant models. The model also begs the question: can we continue to scale out across AZs, and can we do so for applications that do not require that the ADC manage state (e.g. no sticky sessions)? This leads us to our next approach.

Option 3: Standalones in Each AZ

Benefits:
● Cost.
● Leverages availability zone concepts.
● Similar to a multiple-DC deployment, Active/Active generally adds capacity.
● Easiest to scale.

Tradeoffs:
● Management overhead of maintaining configuration and policies across two or more separate systems; application state is not shared across systems within a geo/region.
● Requires DNS/GSLB even though not necessarily "geo-region" HA.
● Best suited for inbound traffic.
● For the outbound use case: you have the distributed gateway issue (i.e. who will be the gateway, how will device/instance failure be handled, etc.), and SNAT is required (return traffic needs to return to the originating device).
● For the internal LB model: DNS is required to distribute traffic between each AZ VIP.

HA Summary:
● Single device failure = DNS/GSLB timeout for 1/(N devices) of traffic
● AZ failure = DNS/GSLB timeout for 1/(N devices) of traffic

One of the common themes between options 2 and 3 is that orchestration is required to manage the configuration across devices. In general, the problem is that the network objects (which are bound to layer 3 addresses) cannot be shared due to differing underlying subnets.

Summary

Above, a number of options for deploying BIG-IP in highly available or horizontally scaled models were discussed. The path you take will depend on your application's needs. For example, if you have an application that requires persistent connections, you'll want to leverage one of the architectures which use device clustering and an Active/Standby approach. If persistence is managed within your application, you might aim to try one of the horizontally scalable models.
Some of the deployment models we discussed are better enabled by the use of configuration management tools to manage the configuration objects across multiple BIG-IPs. In the next article, we'll walk through how the lifecycle of BIG-IP and network services can be fully automated using open-source tools in AWS. These examples will show the power of using the iControlSOAP and iControlREST APIs to automate your network.
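As a small taste of what's coming, creating a pool through iControl REST is a single HTTP call. This is a hedged sketch: the management address, credentials, and member addresses below are placeholders.

# Create a pool with two members via iControl REST
curl -sk -u admin:admin https://10.0.0.245/mgmt/tm/ltm/pool \
    -H "Content-Type: application/json" \
    -X POST -d '{"name":"demo-pool","monitor":"http","members":[{"name":"10.0.2.10:80"},{"name":"10.0.2.11:80"}]}'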
F5 in AWS Part 5 - Cloud-init, Single-NIC, and Auto Scale Out in BIG-IP

Updated for Current Versions and Documentation

Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

The following article covers features and examples in the 12.1 AWS Marketplace release, discussed in the following documentation:
Amazon Web Services: Single NIC BIG-IP VE
Amazon Web Services: Auto Scaling BIG-IP VE

You can find the BIG-IP Hourly and BYOL releases in the Amazon Marketplace here. BIG-IP utility billing images are available, which makes it a great time to talk about some of the functionality.

So far in Chris's series, we have discussed some of the highly-available deployment footprints of BIG-IP in AWS and how these might be orchestrated. Several of these footprints leverage BIG-IP's Device Service Clustering (DSC) technology for configuration management across devices and also lend themselves to multi-app or multi-tenant configurations in a shared-service model. But what if you want to deploy BIG-IP on a per-app or per-tenant basis, in a horizontally scalable footprint that plays well with the concepts of elasticity and immutability in cloud? Today we have just the option for you. Before highlighting these scalable deployment models in AWS, we need to cover cloud-init and single-NIC configurations: two important additions to BIG-IP that enable an Auto Scaling topology.

Elasticity

Elasticity is obviously one of the biggest promises and benefits of cloud. By leveraging cloud, we are essentially tapping into the "unlimited" (at least relative to our own datacenters) resources large cloud providers have built. In practice, this means adopting new methodologies and approaches to truly deliver on this promise.

Immutability

In the traditional operational model of datacenters, everything was "actively" managed. Physical infrastructure still tends to lend itself to active management, but even the virtualized services and applications running on top of the infrastructure were actively managed: servers were patched, code was upgraded live and in place, etc. However, to achieve true elasticity, where things spin up or down and are more ephemeral in nature, a new approach was required. Instead of trying to patch or upgrade individual instances, the approach was to treat them as disposable, which meant focusing more on the build process itself (ex. Netflix's famous Building with Legos approach). Yes, the concept of golden images/snapshots has existed since the beginning of virtualization, but cloud, with self-service, automation, and auto scale functionality, forced this to a new level. The operations focus shifted more towards a consistent and repeatable "build" or "packaging" effort, with the final goal of creating instances that didn't need to be touched, logged into, etc. In the specific context of AWS Auto Scale groups, that means modifying the Auto Scale group's "launch config". The steps for creating new instances involve either referencing an entirely new image ID or perhaps a modification to a cloud-init config.

Cloud-init

What is it? First, let's talk about cloud-init as it is used with most Linux distributions. Most of you who are evaluating or operating in the cloud have heard of it. For those who haven't, cloud-init is an industry standard for bootstrapping machines at startup. It provides a simple domain-specific language for common infrastructure provisioning tasks.
You can read the official docs here. For the average Linux or systems engineer, cloud-init is used to perform tasks such as installing a custom package, updating yum repositories, or installing certificates to perform final customizations to a "base" or "golden" image. For example, the Security team might create an approved hardened base image, and various Dev teams would use cloud-init to customize the image so it booted up with an 'identity' if you will: an application server with an Apache webserver running, or a database server with MySQL provisioned.

Let's start with the most basic "Hello World" use of cloud-init: passing in User Data (in this case a simple bash script). If launching an instance via the AWS Console, on the Configure Instance page, navigate down to "Advanced Details":

Figure 1: User Data input field - bash

However, User Data is limited to < 16KB, and one of the real powers of cloud-init comes from extending functionality past this boundary and providing a standardized way to provision, or ahem, "initialize" instances. Instead of using unwieldy bash scripts that probed whether it was an Ubuntu or a Red Hat instance and used this or that OS method (ex. apt-get vs. rpm) to install a package, configure users, DNS settings, mount a drive, etc., you could pass a YAML file starting with #cloud-config that did a lot of this heavy lifting for you.

Figure 2: User Data input field - cloud-config

Similar to one of the benefits of Chef, Puppet, Salt or Ansible, cloud-init provides a reliable OS or distribution abstraction, but where those approaches require external orchestration to configure instances, this is internally orchestrated from the very first boot, which is more conducive to the "immutable" workflow.

NOTE: cloud-init also complements and helps bootstrap those tools for more advanced or sophisticated workflows (ex. installing Chef/Puppet to keep long-running, non-immutable services under operation/management and to prevent configuration drift).

This brings us to another important distinction. Cloud-init originated as a project from Canonical (Ubuntu) and was designed for general-purpose OSs. The BIG-IP's OS (TMOS), however, is a highly customized, hardened OS, so most of the cloud-init modules don't strictly apply. Much of the BIG-IP's configuration is consumed via its APIs (TMSH, iControl REST, etc.) and stored in its database, MCPD. We can still achieve some of the benefits of having cloud-init, but instead we will mostly leverage the simple bash processor.

So when Auto Scaling BIG-IPs, there are a couple of approaches:
1) Creating a custom image, as described in the official documentation.
2) Providing a cloud-init configuration. This is a lighter-weight approach in that it doesn't require the customization work above.
3) Using a combination of the two: creating a custom image and leveraging cloud-init. For example, you may create a custom image with ASM provisioned and SSL certs/keys installed, and use cloud-init to configure additional environment-specific elements.

Disclaimer: Packaging is an art; just look at the rise of Docker and new operating systems. Ideally, the more you bake into the image upfront, the more predictable it will be and the faster it deploys. However, the less you build in, the more flexible you can be. Things like installing libraries, compiling, etc. are usually worth building into the image upfront.
However, the BIG-IP is already a hardened image, and installing libraries is not something required or recommended, so the task is more about addressing the last, lighter-weight configuration steps. That said, depending on your priorities and objectives, installing sensitive keying material, setting credentials, pre-provisioning modules, etc. might make good candidates for investing in building custom images.

Using Cloud-init with CloudFormation Templates

Remember when we talked about how you could use CloudFormation templates in a previous post to set up BIG-IP in a standard way? Because the CloudFormation service by itself only gave us the ability to lay down the EC2/VPC infrastructure, we were still left with the remaining 80% of the work to do; we needed an external agent or service (in our case Ansible) to configure the BIG-IP and application services. Now, with cloud-init on the BIG-IP (using version 12.0 or later), we can perform that last 80% for you.

Using Cloud-init with BIG-IP

As you can imagine, there's quite a lot you can do with just that one simple bash script shown above. More interestingly, we also installed AWS's CloudFormation helper scripts (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html) to help extend cloud-init and unlock a larger, more powerful set of AWS functionality. So when used with CloudFormation, our User Data simply changes to executing the following AWS CloudFormation helper script instead:

"UserData": {
    "Fn::Base64": {
        "Fn::Join": [
            "",
            [
                "#!/bin/bash\n",
                "/opt/aws/apitools/cfn-init-1.4-0.amzn1/bin/cfn-init -v -s ",
                { "Ref": "AWS::StackId" },
                " -r ",
                "Bigip1Instance",
                " --region ",
                { "Ref": "AWS::Region" },
                "\n"
            ]
        ]
    }
}

This allows us to do things like obtaining variables passed in from the CloudFormation environment, grabbing various information from the metadata service, creating or downloading files, and running a particular sequence of commands, so that once BIG-IP has finished booting, our entire application delivery service is up and running. For more information, this page discusses how metadata is attached to an instance using CloudFormation templates: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-commands.

Example BYOL and Utility CloudFormation Templates

We've posted several examples on GitHub to get you started: https://github.com/f5networks/f5-aws-cloudformation

In just a few short clicks, you can have an entire BIG-IP deployment up and running. The two examples below will launch an entire reference stack complete with VPCs, subnets, routing tables, a sample webserver, etc., and show the use of cloud-init to bootstrap a BIG-IP. Cloud-init is used to configure interfaces, Self-IPs, database variables, a simple virtual server, and, in the case of the BYOL instance, to license BIG-IP.
Let’s take a closer look at the BIG-IP resource created in one of these to see what’s going on here: "Bigip1Instance ": { "Metadata ": { "AWS::CloudFormation::Init ": { "config ": { "files ": { "/tmp/firstrun.config ": { "content ": { "Fn::Join ": [ " ", [ "#!/bin/bash\n ", "HOSTNAME=`curl http://169.254.169.254/latest/meta-data/hostname`\n ", "TZ='UTC'\n ", "BIGIP_ADMIN_USERNAME=' ", { "Ref ": "BigipAdminUsername " }, "'\n ", "BIGIP_ADMIN_PASSWORD=' ", { "Ref ": "BigipAdminPassword " }, "'\n ", "MANAGEMENT_GUI_PORT=' ", { "Ref ": "BigipManagementGuiPort " }, "'\n ", "GATEWAY_MAC=`ifconfig eth0 | egrep HWaddr | awk '{print tolower($5)}'`\n ", "GATEWAY_CIDR_BLOCK=`curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/${GATEWAY_MAC}/subnet-ipv4-cidr-block`\n ", "GATEWAY_NET=${GATEWAY_CIDR_BLOCK%/*}\n ", "GATEWAY_PREFIX=${GATEWAY_CIDR_BLOCK#*/}\n ", "GATEWAY=`echo ${GATEWAY_NET} | awk -F. '{ print $1\ ".\ "$2\ ".\ "$3\ ".\ "$4+1 }'`\n ", "VPC_CIDR_BLOCK=`curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/${GATEWAY_MAC}/vpc-ipv4-cidr-block`\n ", "VPC_NET=${VPC_CIDR_BLOCK%/*}\n ", "VPC_PREFIX=${VPC_CIDR_BLOCK#*/}\n ", "NAME_SERVER=`echo ${VPC_NET} | awk -F. '{ print $1\ ".\ "$2\ ".\ "$3\ ".\ "$4+2 }'`\n ", "POOLMEM=' ", { "Fn::GetAtt ": [ "Webserver ", "PrivateIp " ] }, "'\n ", "POOLMEMPORT=80\n ", "APPNAME='demo-app-1'\n ", "VIRTUALSERVERPORT=80\n ", "CRT='default.crt'\n ", "KEY='default.key'\n " ] ] }, "group ": "root ", "mode ": "000755 ", "owner ": "root " }, "/tmp/firstrun.utils ": { "group ": "root ", "mode ": "000755 ", "owner ": "root ", "source ": "http://cdn.f5.com/product/templates/utils/firstrun.utils " } "/tmp/firstrun.sh ": { "content ": { "Fn::Join ": [ " ", [ "#!/bin/bash\n ", ". /tmp/firstrun.config\n ", ". /tmp/firstrun.utils\n ", "FILE=/tmp/firstrun.log\n ", "if [ ! 
-e $FILE ]\n ", " then\n ", " touch $FILE\n ", " nohup $0 0<&- &>/dev/null &\n ", " exit\n ", "fi\n ", "exec 1<&-\n ", "exec 2<&-\n ", "exec 1<>$FILE\n ", "exec 2>&1\n ", "date\n ", "checkF5Ready\n ", "echo 'starting tmsh config'\n ", "tmsh modify sys ntp timezone ${TZ}\n ", "tmsh modify sys ntp servers add { 0.pool.ntp.org 1.pool.ntp.org }\n ", "tmsh modify sys dns name-servers add { ${NAME_SERVER} }\n ", "tmsh modify sys global-settings gui-setup disabled\n ", "tmsh modify sys global-settings hostname ${HOSTNAME}\n ", "tmsh modify auth user admin password \ "'${BIGIP_ADMIN_PASSWORD}'\ "\n ", "tmsh save /sys config\n ", "tmsh modify sys httpd ssl-port ${MANAGEMENT_GUI_PORT}\n ", "tmsh modify net self-allow defaults add { tcp:${MANAGEMENT_GUI_PORT} }\n ", "if [[ \ "${MANAGEMENT_GUI_PORT}\ " != \ "443\ " ]]; then tmsh modify net self-allow defaults delete { tcp:443 }; fi \n ", "tmsh mv cm device bigip1 ${HOSTNAME}\n ", "tmsh save /sys config\n ", "checkStatusnoret\n ", "sleep 20 \n ", "tmsh save /sys config\n ", "tmsh create ltm pool ${APPNAME}-pool members add { ${POOLMEM}:${POOLMEMPORT} } monitor http\n ", "tmsh create ltm policy uri-routing-policy controls add { forwarding } requires add { http } strategy first-match legacy\n ", "tmsh modify ltm policy uri-routing-policy rules add { service1.example.com { conditions add { 0 { http-uri host values { service1.example.com } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 1 } }\n ", "tmsh modify ltm policy uri-routing-policy rules add { service2.example.com { conditions add { 0 { http-uri host values { service2.example.com } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 2 } }\n ", "tmsh modify ltm policy uri-routing-policy rules add { apiv2 { conditions add { 0 { http-uri path starts-with values { /apiv2 } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 3 } }\n ", "tmsh create ltm virtual /Common/${APPNAME}-${VIRTUALSERVERPORT} { destination 0.0.0.0:${VIRTUALSERVERPORT} mask any ip-protocol tcp pool /Common/${APPNAME}-pool policies replace-all-with { uri-routing-policy { } } profiles replace-all-with { tcp { } http { } } source 0.0.0.0/0 source-address-translation { type automap } translate-address enabled translate-port enabled }\n ", "tmsh save /sys config\n ", "date\n ", "# typically want to remove firstrun.config after first boot\n ", "# rm /tmp/firstrun.config\n " ] ] }, "group ": "root ", "mode ": "000755 ", "owner ": "root " } }, "commands ": { "b-configure-Bigip ": { "command ": "/tmp/firstrun.sh\n " } } } } }, "Properties ": { "ImageId ": { "Fn::FindInMap ": [ "BigipRegionMap ", { "Ref ": "AWS::Region " }, { "Ref ": "BigipPerformanceType " } ] }, "InstanceType ": { "Ref ": "BigipInstanceType " }, "KeyName ": { "Ref ": "KeyName " }, "NetworkInterfaces ": [ { "Description ": "Public or External Interface ", "DeviceIndex ": "0 ", "NetworkInterfaceId ": { "Ref ": "Bigip1ExternalInterface " } } ], "Tags ": [ { "Key ": "Application ", "Value ": { "Ref ": "AWS::StackName " } }, { "Key ": "Name ", "Value ": { "Fn::Join ": [ " ", [ "BIG-IP: ", { "Ref ": "AWS::StackName " } ] ] } } ], "UserData ": { "Fn::Base64 ": { "Fn::Join ": [ " ", [ "#!/bin/bash\n ", "/opt/aws/apitools/cfn-init-1.4-0.amzn1/bin/cfn-init -v -s ", { "Ref ": "AWS::StackId " }, " -r ", "Bigip1Instance ", " --region ", { "Ref ": "AWS::Region " }, "\n " ] ] } } }, "Type ": "AWS::EC2::Instance " } Above may look like a lot at first but high level, we start by creating some files "inline" as well as “sourcing” 
some files from a remote location.

/tmp/firstrun.config - Here we create a file inline, laying down variables from the CloudFormation stack deployment itself and even the metadata service (http://169.254.169.254/latest/meta-data/). Take a look at the "Ref" stanzas. When this file is laid down on the BIG-IP disk itself, those variables will be interpolated and contain the actual contents. The idea here is to try to keep config and execution separate.

/tmp/firstrun.utils - These are just some helper functions to help with initial provisioning. We use these to determine when the BIG-IP is ready for a particular configuration step (ex. after a licensing or provisioning step). Note that instead of creating the file inline like the config file above, we simply "source" or download the file from a remote location.

/tmp/firstrun.sh - This file is created inline as well, and it is where it all comes together. The first thing we do is load the config variables from firstrun.config and the helper functions from firstrun.utils. We then create a separate log file (/tmp/firstrun.log) to capture the output of this particular script; capturing the output of these various commands just helps with debugging runs. Then we run a function called checkF5Ready (loaded from that helper function file) to make sure BIG-IP's database is up and ready to accept a configuration. The rest may look more familiar, and it is where most of the user customization takes place. We use variables from the config file to configure the BIG-IP using familiar methods like TMSH and iControl REST. Technically, you could lay down an entire config file (like an SCF) and load it instead; we use tmsh here for simplicity. The possibilities are endless.

Disclaimer: the specific implementation above will certainly be optimized and evolve, but the most important takeaway is that we can now leverage cloud-init and AWS's helper libraries to bootstrap the BIG-IP into a working configuration from the very first boot!

Debugging Cloud-init

What if something goes wrong? Where do you look for more information?
The first place you might look is in the various cloud-init logs in /var/log (cloud-init.log, cfn-init.log, cfn-wire.log). Below is example output for the CFTs discussed above:

[admin@ip-10-0-0-205:NO LICENSE:Standalone] log # tail -150 cfn-init.log
2016-01-11 10:47:59,353 [DEBUG] CloudFormation client initialized with endpoint https://cloudformation.us-east-1.amazonaws.com
2016-01-11 10:47:59,353 [DEBUG] Describing resource BigipEc2Instance in stack arn:aws:cloudformation:us-east-1:452013943082:stack/as-testing-byol-bigip/07c962d0-b893-11e5-9174-500c217b4a62
2016-01-11 10:47:59,782 [DEBUG] Not setting a reboot trigger as scheduling support is not available
2016-01-11 10:47:59,790 [INFO] Running configSets: default
2016-01-11 10:47:59,791 [INFO] Running configSet default
2016-01-11 10:47:59,791 [INFO] Running config config
2016-01-11 10:47:59,792 [DEBUG] No packages specified
2016-01-11 10:47:59,792 [DEBUG] No groups specified
2016-01-11 10:47:59,792 [DEBUG] No users specified
2016-01-11 10:47:59,792 [DEBUG] Writing content to /tmp/firstrun.config
2016-01-11 10:47:59,792 [DEBUG] No mode specified for /tmp/firstrun.config
2016-01-11 10:47:59,793 [DEBUG] Writing content to /tmp/firstrun.sh
2016-01-11 10:47:59,793 [DEBUG] Setting mode for /tmp/firstrun.sh to 000755
2016-01-11 10:47:59,793 [DEBUG] Setting owner 0 and group 0 for /tmp/firstrun.sh
2016-01-11 10:47:59,793 [DEBUG] Running command b-configure-BigIP
2016-01-11 10:47:59,793 [DEBUG] No test for command b-configure-BigIP
2016-01-11 10:47:59,840 [INFO] Command b-configure-BigIP succeeded
2016-01-11 10:47:59,841 [DEBUG] Command b-configure-BigIP output: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 40 0 40 0 0 74211 0 --:--:-- --:--:-- --:--:-- 40000
2016-01-11 10:47:59,841 [DEBUG] No services specified
2016-01-11 10:47:59,844 [INFO] ConfigSets completed
2016-01-11 10:47:59,851 [DEBUG] Not clearing reboot trigger as scheduling support is not available
[admin@ip-10-0-0-205:NO LICENSE:Standalone] log #

If trying out the example templates above, you can inspect the various files mentioned. In addition to checking for their general presence:
/tmp/firstrun.config = make sure variables were passed as you expected.
/tmp/firstrun.utils = make sure it exists and was downloaded.
/tmp/firstrun.log = see if any obvious errors were output.

It may also be worth checking the AWS CloudFormation console to make sure you passed the parameters you were expecting.

Single-NIC

Another one of the important building blocks introduced with 12.0 on the AWS and Azure Virtual Editions is the ability to run BIG-IP with just a single network interface. Typically, BIG-IPs were deployed in a multi-interface model, where interfaces were attached to an out-of-band management network and one or more traffic (or "data-plane") networks. But, as we know, cloud architectures scale by requiring simplicity, especially at the network level. To this day, some clouds can only support instances with a single IP on a single NIC. In AWS's case, although they do support multiple NICs and multiple IPs, some of their services like ELB only point to the first IP address of the first NIC. So this single-NIC configuration makes it not only possible but also dramatically easier to deploy in these various architectures.

How this works: we can now attach just one interface to the instance, and BIG-IP will start up, recognize this, and use DHCP to configure the necessary settings on that interface.
Under the hood, the following DB keys will be set:

admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list sys db provision.1nic one-line
sys db provision.1nic { value "enable" }
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list sys db provision.1nicautoconfig one-line
sys db provision.1nicautoconfig { value "enable" }

provision.1nic = allows both management and data-plane traffic to use the same interface
provision.1nicautoconfig = uses the address from DHCP to configure a VLAN, Self-IP and default gateway

Ex. network objects automatically configured:

admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net vlan
net vlan internal {
    if-index 112
    interfaces {
        1.0 { }
    }
    tag 4094
}
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net self
net self self_1nic {
    address 10.0.1.65/24
    allow-service {
        default
    }
    traffic-group traffic-group-local-only
    vlan internal
}
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net route
net route default {
    gw 10.0.1.1
    network default
}

Note: the Traffic Management Shell and the Configuration Utility (GUI) are still available on ports 22 and 443 respectively. If you want to run the management GUI on a higher port (for instance, if you don't have the BIG-IPs behind a Port Address Translation service like ELB and want to run an HTTPS virtual on 443), use the following commands:

tmsh modify sys httpd ssl-port 8443
tmsh modify net self-allow defaults add { tcp:8443 }
tmsh modify net self-allow defaults delete { tcp:443 }

WARNING: Now that management and data plane run on the same interface, make sure to modify your Security Groups to restrict access to SSH and whatever port you use for the management GUI to trusted networks.

UPDATE: On Single-NIC, Device Service Clustering currently only supports configuration syncing (network failover is restricted for now due to BZ-606032).

In general, the single-NIC model lends itself better to single-tenant or per-app deployments, where you need the advanced services from BIG-IP like content routing policies, iRules scripting, WAF, etc. but don't necessarily care to maintain a management subnet in the deployment and are just optimizing or securing a single application. By single tenant we also mean single management domain, as you're typically running everything through a single wildcard virtual (ex. 0.0.0.0/0, 0.0.0.0/80, 0.0.0.0/443, etc.) vs. giving each tenant its own virtual server (usually with its own IP and configuration) to manage. However, you can still technically run multiple applications behind this virtual with a policy or iRule, where you make traffic steering decisions based on L4-L7 content (SNI, hostname headers, URIs, etc.). In addition, if the BIG-IPs are sitting behind a Port Address Translation service, it is also possible to stack virtual services on ports instead. Ex.

0.0.0.0:444 = Virtual Service 1
0.0.0.0:445 = Virtual Service 2
0.0.0.0:446 = Virtual Service 3

We'll let you get creative here.

BIG-IP Auto Scale

Finally, the last component of Auto Scaling BIG-IPs involves building scaling policies via CloudWatch alarms. In addition to the built-in EC2 metrics in CloudWatch, BIG-IP can report its own set of metrics, shown below:

Figure 3: CloudWatch metrics to scale BIG-IPs based on traffic load.
This can be configured with the following TMSH commands on any version 12.0 or later build:

tmsh modify sys autoscale-group autoscale-group-id ${BIGIP_ASG_NAME}
tmsh load sys config merge file /usr/share/aws/metrics/aws-cloudwatch-icall-metrics-config

These commands tell BIG-IP to push the above metrics to a custom "Namespace" on which we can roll up data via standard aggregation functions (min, max, average, sum). This namespace (based on the Auto Scale group name) will appear as a row under the "Custom Metrics" dropdown on the left sidebar in CloudWatch (left side of Figure 1). Once these BIG-IP or EC2 metrics have been populated, CloudWatch alarms can be built, and these alarms are available for use in Auto Scaling policies for the BIG-IP Auto Scale group (Amazon provides some nice instructions here).

Auto Scaling Pool Members and Service Discovery

If you are scaling your ADC tier, you are likely going to be scaling your application as well. There are two options for discovering pool members in your application's Auto Scale group.

1) FQDN Nodes. For example, in a typical sandwich deployment, your application members might also be sitting behind an internal ELB, so you would simply point your FQDN node at the ELB's DNS name. For more information, please see: https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm-implementations-12-1-0/25.html?sr=56133323

2) BIG-IP's AWS Auto Scale Pool Member Discovery feature (introduced in v12.0). This feature polls the Auto Scale group via API calls and populates the pool based on its membership. For more information, please see: https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-ve-autoscaling-amazon-ec2-12-1-0/2.html

Putting it all together

The high-level steps for Auto Scaling BIG-IP include the following:
1) Optionally* creating an Elastic Load Balancer group which will direct traffic to BIG-IPs in your Auto Scale group once they become operational. Otherwise, you will need Global Server Load Balancing (GSLB).
2) Creating a launch configuration in EC2 (referencing either a custom image ID and/or using cloud-init scripts as described above).
3) Creating an Auto Scale group using this launch configuration.
4) Creating CloudWatch alarms using the EC2 or custom metrics reported by BIG-IP.
5) Creating scaling policies for your BIG-IP Auto Scale group using the alarms above. You will want to create both scale-up and scale-down policies.

Here are some things to keep in mind when Auto Scaling BIG-IP:
● BIG-IP must run in a single-interface configuration with a wildcard listener (as we talked about earlier). This is required because we don't know what IP address the BIG-IP will get.
● Auto Scale groups themselves consist of utility instances of BIG-IP.
● The scale-up time for BIG-IP is about 12-20 minutes depending on what is configured or provisioned. While this may seem like a long time, creating the right scaling policies (polling intervals, thresholds, unit of scale) makes this a non-issue.
● This deployment model lends itself toward the themes of stateless, horizontal scalability and immutability embraced by the cloud. Currently, the config on each device is updated once and only once at device startup. The only way to change the config will be through the creation of a new image or a modification to the launch configuration. Stay tuned for a clustered deployment which is managed in a more traditional operational approach.
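To make steps 4 and 5 above concrete, here is a rough CLI sketch of wiring one of those custom metrics to a scale-up policy. The group name, metric name, and threshold are placeholders; use whichever metric your deployment actually reports under "Custom Metrics".

# 1. Create a scale-up policy on the BIG-IP Auto Scale group (names are placeholders)
aws autoscaling put-scaling-policy --auto-scaling-group-name my-bigip-asg \
    --policy-name bigip-scale-up --adjustment-type ChangeInCapacity --scaling-adjustment 1

# 2. Alarm on one of the custom metrics BIG-IP pushes into the ASG-named namespace;
#    metric name and threshold below are placeholders
aws cloudwatch put-metric-alarm --alarm-name bigip-high-throughput \
    --namespace my-bigip-asg --metric-name throughput-per-sec \
    --statistic Average --period 300 --evaluation-periods 1 \
    --threshold 100000000 --comparison-operator GreaterThanThreshold \
    --alarm-actions <arn-of-bigip-scale-up-policy>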
If interested in exploring Auto Scale further, we have put together some examples of scaling the BIG-IP tier here: https://github.com/f5devcentral/f5-aws-autoscale

* The above GitHub repository also provides some examples of incorporating BYOL instances in the deployment, so you can leverage BYOL instances for your static load and Auto Scale instances for your dynamic load. See the READMEs for more information.

CloudFormation Templates with Auto Scaled BIG-IP and Application

Need some ideas on how you might leverage these solutions? Now that you can completely deploy a solution with a single template, how about building service catalog items for your business units that deploy different BIG-IP services (LB, WAF), or a managed service built on top of AWS that can be easily deployed each time you on-board a new customer?
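As a closing illustration of the FQDN-node option for service discovery mentioned above, the tmsh configuration is roughly the following; the ELB DNS name is a placeholder.

# Create an FQDN node that tracks the internal ELB's DNS name and a pool that follows it;
# autopopulate keeps the pool in sync as the DNS answers change
tmsh create ltm node app_asg_node fqdn { name internal-app-elb.example.com autopopulate enabled }
tmsh create ltm pool app_asg_pool members add { app_asg_node:80 } monitor http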
F5 in AWS Part 4 - Orchestrating BIG-IP Application Services with Open-Source Tools

Updated for Current Versions and Documentation

Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

The following post references code hosted at F5's Github repository f5networks/aws-deployments. This code provides a demonstration of using open-source tools to configure and orchestrate BIG-IP. Full documentation for F5 BIG-IP cloud work can be found at Cloud Docs: F5 Public Cloud Integrations.

So far we have talked about AWS networking basics, how to run BIG-IP in a VPC, and highly-available deployment footprints. In this post, we'll move on to my favorite topic: orchestration. By this point, you probably have several VMs running in AWS. You've lost track of which configuration is set up on which VM, and you have found yourself slowly going mad as you toggle between the AWS web portal and several SSH windows. I call this 'point-and-click' purgatory. Let's be blunt: why would you move to cloud without realizing the benefits of automation, of which cloud is a large enabler?

If you remember our second article, we mentioned CloudFormation templates as a great way to deploy a standardized set of resources (perhaps BIG-IP + the additional virtualized network resources) in EC2. This is a great start, but we need to configure these resources once they have started, and we need a way to define and execute workflows which will run across a set of hosts, perhaps even hosts which are external to the AWS environment. Enter the use of open-source configuration management and workflow tools that have been popularized by the software development community.

Open-Source Configuration Management and AWS APIs

Lately, I have been playing with Ansible, which is a Python-based, agentless workflow engine for IT automation. By agentless, I mean that you don't need to install an agent on hosts under management. Ansible, like the other tools, provides a number of libraries (or "modules") which provide the ability to manage a diverse collection of remote systems. These modules are typically implemented through the use of API calls, often over HTTP. Out of the box, Ansible comes with several modules for managing resources in AWS. While the EC2 libraries provided are useful for basic orchestration use cases, we decided it would be easier to atomically manage sets of resources using the CloudFormation module. In doing so, we were able to deploy entire CloudFormation stacks which would include items like VPCs, networking elements, BIG-IP, app servers, etc. Underneath the covers, the CloudFormation Ansible module and our own project use the boto Python library to interact with AWS service endpoints.

Ansible provides some basic modules for managing BIG-IP configuration resources. These, along with libraries for similar tools, can be found here:
Ansible
Puppet
SaltStack

In the rest of this post, I'll discuss some work colleagues and I have done to automate BIG-IP deployments in AWS using Ansible. While we chose to use Ansible, we readily admit that Puppet, Chef, Salt and whatever else you use are all appropriate choices for implementing deployment and configuration management workflows for your network. Each has its upsides and downsides, and different tools may lend themselves to different use cases for your infrastructure. Browse the web to figure out which tool is right for you.
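If you want a feel for what driving one of these workflows looks like before digging into the repository, an Ansible run is typically just a playbook invocation. The inventory path, playbook name, and extra-vars below are hypothetical placeholders, not the repository's real file names.

# Install the tooling (Ansible plus the boto AWS library it relies on)
pip install ansible boto

# Kick off a deployment playbook (names shown are placeholders)
ansible-playbook -i inventory/hosts playbooks/deploy_bigip.yml \
    -e "region=us-east-1 deployment_model=standalone"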
Using Standardized BIG-IP Interfaces
Speaking of APIs, for years F5 has provided the ability to programmatically configure BIG-IP using iControlSOAP. As the audiences performing automation work have matured, so have the weapons of choice. The new hot ticket is REST (Representational State Transfer), and guess what, BIG-IP has a REST interface (you can probably figure out what it is called). Together, iControlSOAP and iControlREST give you the power to manage nearly every configuration element and feature of BIG-IP. These interfaces become extremely powerful when you combine them with your favorite open-source configuration management tool and a cloud that allows you to spin up and down compute and networking resources.
In the project described below, we have also made use of iApps via iControlREST as a way to create a standard virtual server configuration with the correct policies and profiles. The documentation in Github describes this in detail, but our approach shows how iApps provide a strongly supported way to manage network policy across engineering teams. For example, imagine that a team of software engineers has written a framework to deploy applications. You can package the network policy into iApps for various types of apps, and pass these to the teams writing the deployment framework.
Implementing a Service Catalog
To pull the above concepts together, a colleague and I put together the aws-deployments project. The goal was to build a simple service catalog which would enable a user to deploy a containerized application in EC2 with BIG-IP network services sitting in front. This is example code that is not covered by F5 support, but it is a proof of concept that shows how you can fully automate production-like deployments in AWS. Some highlights of the project include:
Use of iControlREST and iControlSOAP within Ansible playbooks to set up advanced topologies of BIG-IP in AWS.
Automated deployment of a basic ASM web application firewall policy to protect a vulnerable web app (Hackazon).
Use of iApps to manage virtual server configurations, including the WAF policy mentioned above.
Figure 1 - Generic Architecture for automating application deployments in public or private cloud
In examining the code, you will see that we provide the opportunity to provision all the deployment models outlined in our earlier post (a single standalone VE, standalone BIG-IP VEs striped across availability zones, clusters within an availability zone, etc.). We used Ansible and the interfaces on BIG-IP to orchestrate the workflows associated with these deployment models. To perform the clustering step, we have used the iControlSOAP interface on BIG-IP. The final set of technology used is depicted in Figure 2.
Figure 2 - Technologies used in the aws-deployments project on Github
Read the Code and Test It Yourself
All the code I have mentioned is available at f5networks/aws-deployments. We encourage you to download and run the code for yourself. Setting up a development environment with the necessary dependencies is easy; we have packaged all the dependencies for use with either Vagrant or Docker as development tools. The instructions for either of these approaches can be found in the README.md or in the /docs directory.
The following video shows an end-to-end usage example. (Keep in mind that the code has been updated since this video was produced.)
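If you want a feel for the kind of calls the playbooks make against BIG-IP, the following is a minimal, hypothetical sketch of an iControlREST request from Python that creates an LTM pool with one member. The management address, credentials, and object names are placeholders, and a real deployment should verify TLS certificates rather than disabling verification.

```python
# Hypothetical sketch of an iControl REST call: create an LTM pool on BIG-IP.
# Host, credentials, pool name, and member address are placeholders.
import requests

BIGIP_HOST = "203.0.113.10"          # placeholder management address
AUTH = ("admin", "admin-password")   # placeholder credentials


def create_pool(name, member):
    # iControl REST collection for LTM pools.
    url = f"https://{BIGIP_HOST}/mgmt/tm/ltm/pool"
    payload = {
        "name": name,
        "monitor": "http",
        # Member name is "<ip>:<port>"; the address is derived from it.
        "members": [{"name": member}],
    }
    # verify=False only for a lab sketch; validate the certificate in production.
    resp = requests.post(url, json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(create_pool("demo_app_pool", "10.0.2.15:80"))
```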
At the end of the day, our goal for this work was to collect customer feedback. Please provide some by leaving a comment below, or by filing 'pull requests' or 'issues' in Github. In the next few weeks, we will be updating the project to include the Hackazon app mentioned above, to show how to cluster BIG-IP across availability zones, and to show how to deploy an ASM policy with an iApp. Have fun!
BIG-IP Virtual Edition Configuration Utility log in failures
I have just installed my BIG-IP Virtual Edition trial and I am having trouble logging in to the Configuration Utility. I have configured the VM and can connect my browser to the Configuration Utility, and I get the login screen, but it keeps telling me there is an authentication problem:
Authentication required! This server could not verify that you are authorized to access the URL "/tmui/logmein.html" ....
...
I've tried the default login and password of 'admin' / 'admin' and 'admin' / 'default'. I've seen both pairs mentioned online, but neither works for me. What am I missing?
Secure Your New AWS Application with an F5 Web Application Firewall: Part 2 of 4
In Part 1 of our series, we used a CloudFormation Template (CFT) to create a repeatable deployment of our application in AWS. Our app is running in the cloud, our users are connecting to it, and we're serving traffic. But more importantly, we're selling our products. However, after a bad experience with our application falling down, we learned the hard way that it is no longer secure.
The challenge: Our app in the cloud is getting hacked.
The solution: Add a scalable web application firewall (WAF).
In the data center, we had edge security measures that protected our application. In the cloud, we no longer have this, so we're now vulnerable to attacks. This doesn't mean that Amazon is not a secure cloud environment; it means that we need to secure our application and its data in the cloud. With a little research, we found that Amazon has a shared responsibility model for security. They take responsibility for security "of" the cloud, and as an organization hosting an application in AWS, we're responsible for security "in" the cloud. For more information, see https://aws.amazon.com/compliance/shared-responsibility-model/.
In our last article, we showed a fairly simple setup in AWS. Now, to secure our application, we're going to add a BIG-IP VE web application firewall (WAF) cluster. Not only will this secure our application, but it also takes advantage of AWS Auto Scaling, adding more BIG-IP VE instances when traffic or CPU load requires it.
To create and configure this Auto Scaling WAF, F5 provides a CloudFormation template. This template and others are available on GitHub. This CFT assumes that you already have a VPC with multiple subnets, each in a different availability zone. If you ran our CFT from last week, you should have this already.
You must also create a classic AWS ELB that will go in front of the BIG-IP VE instances. This ELB should listen on port 80 and have a health check for TCP port 8443. The ELB should also have a security group associated with it. This group should have the following inbound ports open: 22 (for SSH access to BIG-IP VE), 8443 (for the BIG-IP VE Configuration utility), and 80 (for the web app). A rough sketch of scripting this ELB setup appears at the end of this section.
Before you deploy the template, gather this information:
The AWS ELB name (the one that will go in front of the BIG-IP VEs), for example, BIGIPELB.
The VPC, subnet, and security group names/IDs.
The DNS name for the ELB in front of the app servers, for example: Test-StackELB-55UMG84080MI-342616460.us-east-2.elb.amazonaws.com.
When you deploy the template, an Auto Scaling group, a launch configuration, and a BIG-IP VE instance are created. You can connect to the website by using the BIG-IP ELB address, for example: http://bigipelb-1631946395.us-east-2.elb.amazonaws.com/. The ID of the server you're connected to is displayed on the top menu bar.
If you want to access BIG-IP VE, you can use SSH to connect to the instance. Then you can set the admin password (tmsh modify auth password admin) and connect to the BIG-IP VE Configuration utility (https://PublicIP:8443).
The BIG-IP VE instances that make up the WAF cluster are licensed hourly, and they automatically license themselves when they are launched. They come in different throughput limits. We're testing right now, so we're going to start with a 25 Mbps image on a small AWS instance type (2 vCPU, 4 GB memory). Later, when we go to production, we can update the throughput and AWS instance type.
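As promised above, here is a rough sketch, assuming the boto3 library and placeholder subnet and security group IDs, of how the classic ELB prerequisite could be created ahead of running the template. The listener and health check match the port 80 and TCP:8443 requirements described earlier; this is not the CFT itself, just one way to script the prerequisite.

```python
# Rough sketch: create the classic ELB that fronts the BIG-IP VE instances.
# Subnet and security group IDs are placeholders.
import boto3

elb = boto3.client("elb", region_name="us-east-2")

# Listener on port 80 for application traffic.
elb.create_load_balancer(
    LoadBalancerName="BIGIPELB",  # the name you supply to the CFT
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
    }],
    Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],  # placeholders, one per AZ
    SecurityGroups=["sg-0ccc3333"],  # placeholder: allows 22, 80, 8443 inbound
)

# Health check against TCP port 8443 on the BIG-IP VE instances.
elb.configure_health_check(
    LoadBalancerName="BIGIPELB",
    HealthCheck={
        "Target": "TCP:8443",
        "Interval": 10,
        "Timeout": 5,
        "UnhealthyThreshold": 3,
        "HealthyThreshold": 2,
    },
)
```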
Maintenance of the WAF Cluster
The challenge: Over time, the WAF cluster needs updates.
The solution: Update the CloudFormation stack without bringing down the cluster.
You've got your Auto Scaling WAF up and running, and it's sending notifications about traffic that it's analyzing. When we created this deployment, we specified 25 Mbps as the throughput limit for our BIG-IP VE instances. But now we're selling millions of packets of our hotdog-flavored lemonade and it's time to add some resources.
The good news is that you can simply re-run the CloudFormation stack and update the settings. New instances will be launched and old instances will be terminated, and traffic will continue to be processed during this time. To ensure the new BIG-IP VE instances have the same configuration as the ones you're terminating, you must save off the BIG-IP VE configuration before you re-deploy.
IS THIS REALLY POSSIBLE? Yes. This is possible. The WAF keeps running, customers keep buying, and the lemonade packets are flying out the door.
For example, let's say we want to increase the BIG-IP VE throughput, the number of BIG-IP VE instances, and the AWS instance type. To do this, we:
Back up the BIG-IP VE to a .ucs file.
Save the .ucs file to the S3 bucket that was created when we deployed the CFT.
Re-deploy the CFT and choose different settings.
A rough sketch of this save-and-redeploy flow appears at the end of this article. For more information, watch this video that shows how it works.
Over time you can also:
Upgrade to newer versions.
Apply hotfixes, security fixes, etc.
The same process applies; you can update your running configuration without losing your changes.
The bummer is, if you're a developer, you may not want to manage and maintain a WAF. Never fear, you have options! Part 3 will address this issue.
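To make the maintenance steps referenced above a little more concrete, here is a rough sketch, assuming boto3 and placeholder bucket, stack, and parameter names. It presumes the .ucs archive has already been created on the BIG-IP VE (for example, with tmsh save sys ucs) and copied off-box; the parameter keys shown are hypothetical and would need to match whatever the CFT actually defines.

```python
# Rough sketch of the save-and-redeploy flow: stage the .ucs in S3, then
# update the stack so Auto Scaling rolls in instances with new settings.
# Bucket, stack, and parameter names are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-2")
cfn = boto3.client("cloudformation", region_name="us-east-2")

# 1. Put the saved configuration where newly launched instances can fetch it.
s3.upload_file("backup.ucs", "my-waf-backup-bucket", "backup.ucs")

# 2. Re-run the stack with new settings; instances are replaced gradually,
#    so traffic keeps flowing while the update rolls out.
cfn.update_stack(
    StackName="autoscale-waf",  # placeholder stack name
    UsePreviousTemplate=True,
    Parameters=[
        {"ParameterKey": "throughput", "ParameterValue": "200Mbps"},     # hypothetical key
        {"ParameterKey": "instanceType", "ParameterValue": "m4.large"},  # hypothetical key
    ],
    Capabilities=["CAPABILITY_IAM"],
)
```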
SSL Offload performance in F5 LTM Virtual Edition
Can anyone tell me what the performance of SSL offload is like using F5 LTM Virtual Edition? I am trying to save costs and not purchase hardware right now, and wondered what the limitations were in the Virtual Edition. Surely performance still scales based on the VMware host hardware/virtual machine specs? Any advice or links to white papers appreciated :)
Mark
VE lost communication after upgrade to ESXi 6.7 Update 2
After upgrading ESXi 6.7 from build 13004448 (the last public patch before Update 2) to 13006603 (Update 2), VE 14.1.0.3 lost communication on the TMM interfaces (mgmt worked fine). I did some research. Egress traffic from TMM was sent correctly (another VM could see it), but ingress traffic never reached TMM. I tried sending a burst of 10k broadcast ARP requests into the VLAN the TMM interfaces are connected to, but there was no corresponding change in the stats (tmsh show net interface all-properties raw) on the TMM interfaces (the stats of the mgmt interface, connected to the same VLAN, increased as expected). Restarting the VE did not help. Restarting ESXi did not help. After rolling ESXi back to the previous build, everything started to work. I tried the upgrade to 6.7u2 and the rollback once again, with the same results.
Speculation: since the Update 2 release notes mention a bugfix in vmxnet3, this may be related. AFAIK, TMM takes care of the interfaces by itself (in Linux, they are visible in lspci, but not in ifconfig, ip link, or /sys/class/net/), so some change may have been made in the host-side vmxnet3 that is not yet reflected in the TMM code.
Does anyone have similar experience with ESXi 6.7u2?
Best regards,
Ondrej
snmp log
Hi all,
I've got a little problem with my Cacti setup. As soon as I activate a graph, these logs appear on the F5 LTM (VE):
warning snmpd[6961]: 010e0004:4: MCPD query response exceeding 30 seconds.
Does anyone know why? The graphs in Cacti work well.
Thanks for your help,
Lionel