AWS
Get Started with BIG-IP and BIG-IQ Virtual Edition (VE) Trial
Welcome to the BIG-IP and BIG-IQ trials page! This will be your jumping-off point for setting up a trial version of BIG-IP VE or BIG-IQ VE in your environment. As you can see below, everything you’ll need is included and organized by operating environment — namely by public/private cloud or virtualization platform. To get started with your trial, use the software and documentation found in the links below. Upon requesting a trial, you should have received an email containing your license keys. Please bear in mind that it can take up to 30 minutes to receive your licenses.
Don't have a trial license? Get one here. Or if you're ready to buy, contact us. Looking for other resources like tools, compatibility matrix...

BIG-IP VE and BIG-IQ VE
When you sign up for the BIG-IP and BIG-IQ VE trial, you receive a set of license keys. Each key corresponds to a component listed below:
BIG-IQ Centralized Management (CM) — Manages the lifecycle of BIG-IP instances including analytics, licenses, configurations, and auto-scaling policies
BIG-IQ Data Collection Device (DCD) — Aggregates logs and analytics of traffic and BIG-IP instances to be used by BIG-IQ
BIG-IP Local Traffic Manager (LTM), Access (APM), Advanced WAF (ASM), Network Firewall (AFM), DNS — Keep your apps up and running with BIG-IP application delivery controllers. BIG-IP Local Traffic Manager (LTM) and BIG-IP DNS handle your application traffic and secure your infrastructure. You’ll get built-in security, traffic management, and performance application services, whether your applications live in a private data center or in the cloud.

Select the hypervisor or environment where you want to run VE:

AWS
CFT for single NIC deployment
CFT for three NIC deployment
BIG-IP VE images in the AWS Marketplace
BIG-IQ VE images in the AWS Marketplace
BIG-IP AWS documentation
BIG-IP video: Single NIC deploy in AWS
BIG-IQ AWS documentation
Setting up and Configuring a BIG-IQ Centralized Management Solution
BIG-IQ Centralized Management Trial Quick Start

Azure
Azure Resource Manager (ARM) template for single NIC deployment
Azure ARM template for three NIC deployment
BIG-IP VE images in the Azure Marketplace
BIG-IQ VE images in the Azure Marketplace
BIG-IQ Centralized Management Trial Quick Start
BIG-IP VE Azure documentation
Video: BIG-IP VE Single NIC deploy in Azure
BIG-IQ VE Azure documentation
Setting up and Configuring a BIG-IQ Centralized Management Solution

VMware/KVM/OpenStack
Download BIG-IP VE image
Download BIG-IQ VE image
BIG-IP VE Setup
BIG-IQ VE Setup
Setting up and Configuring a BIG-IQ Centralized Management Solution

Google Cloud
Google Deployment Manager template for single NIC deployment
Google Deployment Manager template for three NIC deployment
BIG-IP VE images in Google Cloud
Google Cloud Platform documentation
Video: Single NIC deploy in Google

Other Resources
AskF5
GitHub community (f5devcentral, f5networks)
Tools to automate your deployment: BIG-IQ Onboarding Tool, F5 Declarative Onboarding, F5 Application Services 3 Extension
Other tools: F5 SDK (Python), F5 Application Services Templates (FAST), F5 Cloud Failover, F5 Telemetry Streaming
Find out which hypervisor versions are supported with each release of VE: BIG-IP Compatibility Matrix, BIG-IQ Compatibility Matrix

Do you have any comments or questions? Ask here.
BIG-IP integration with AWS Gateway Load Balancer - Overview
Introduction
With the release of TMOS version 16.1, BIG-IP now supports AWS Gateway Load Balancer (GWLB). With this integration we are making it much easier and simpler to insert BIG-IP security services into an AWS environment while maintaining high availability and supporting elastic scalability of the BIG-IPs.

When to use GWLB?
F5 BIG-IP delivers a wide range of application and security services. Depending on the service and other requirements, the BIG-IPs are typically deployed in one of two modes: Network mode or Proxy mode.
Important: GWLB is only applicable today with the Network deployment mode.
First, you should identify which deployment mode is relevant for you:

Network (GWLB supported)
Common use cases: network firewall, DDoS protection, transparent WAF
Flow transparency is maintained (no source or destination NAT)
Directing traffic to the BIG-IPs is done by network routing; making sure traffic goes back to the same BIG-IP in order to maintain traffic symmetry is also based on routing

Proxy (not GWLB supported)
Provides ingress services to the application (WAF, LB, L7 DDoS, bot protection); services are applied to an application-specific virtual server on the BIG-IP
The BIG-IP SNAT (source NAT) manages the traffic to ensure that the return traffic arrives at the same BIG-IP, thus maintaining traffic symmetry
Directing user traffic to the BIG-IPs is usually done using DNS. An FQDN will map to a specific virtual server on the BIG-IP.
Important: GWLB does not support proxy devices in the inspection zone. Traffic flow must remain unchanged by the inspection device; for more details see https://aws.amazon.com/blogs/networking-and-content-delivery/integrate-your-custom-logic-or-appliance-with-aws-gateway-load-balancer/

Existing challenges of BIG-IP Network deployment without GWLB
Let's examine two scenarios in which we use the BIG-IP for inspecting network traffic:
Ingress/Egress protection: In this scenario, we want to inspect all traffic coming into and going out of the AWS network. The security services most relevant for this scenario are: firewall, DDoS protection, WAF and IPS.
East-west (inter-VPC / networks) protection: In this scenario we want to inspect all traffic between VPCs and between on-prem and VPCs. The most common security services used are firewall and IPS.
In the two scenarios mentioned above, we need to ensure the relevant flows are routed to the BIG-IP. We will also need to verify that traffic is symmetric – meaning any flow that was sent to the BIG-IP must also return from the same BIG-IP. Today, without GWLB, our method of getting traffic to the BIG-IP is to manipulate the routing tables accordingly. AWS routing tables accept an ENI (Elastic Network Interface, a network interface of an EC2 instance running a BIG-IP) as the 'gateway' for a specific route. Since a routing table only accepts a single ENI, we can only send traffic to a single BIG-IP instance - which of course creates a single point of failure. We can further improve this design by leveraging F5's Cloud Failover Extension (CFE), which allows us to create an active/standby pair. The CFE will automatically manipulate the route tables to always point traffic at the ENI of the active device. You can read more on this design here: https://clouddocs.f5.com/products/extensions/f5-cloud-failover/latest/userguide/aws.html?highlight=route
In summary, the network deployment of the BIG-IP in AWS has the following challenges:
Limited scale - throughput, concurrent connections, etc.,
as only a single device processes the traffic
Unoptimized compute – the passive device sits idle
Mixed admin domains – BIG-IPs are deployed in the same VPC as the workloads they are protecting, and routing table changes are controlled by F5's CFE, which is not always wanted or possible

How GWLB works with BIG-IP
To understand how AWS GWLB works, let's start by taking a closer look at the GWLB. We can break down the GWLB functionality into two main elements:
On the frontend: GWLB presents itself as a next-hop L3 gateway to manage traffic transparently.
GWLB uses a new component: the GWLB endpoint. This endpoint acts as an ENI (Elastic Network Interface) of the GWLB. The only way of sending traffic to GWLB is through this endpoint, therefore it is deployed in a VPC that consumes the security services (i.e. the Consumer VPC).
Admin domain separation: The GWLB and BIG-IP fleet are deployed in their own VPC (Security VPC). One GWLB can receive and inspect traffic from many VPCs (find more detail on design patterns in the 'Deployments' section).
On the backend: GWLB is like an L4 load balancer, very similar to NLB (Network Load Balancer).
Provides traffic distribution across a fleet of BIG-IP instances.
Routes flows and assigns them to a specific BIG-IP for stateful inspection.
Performs health checks of the BIG-IPs, and routes traffic to healthy BIG-IPs only.
Traffic is sent over an L3 tunnel using the GENEVE protocol (requires BIG-IP version 16.1 and above).

BIG-IP integration details:
Requirements: BIG-IP version 16.1 and above
BIG-IP traffic flow: The BIG-IP must be deployed in the same VPC as the GWLB. BIG-IP receives traffic from GWLB over a GENEVE tunnel that uses the BIG-IP's private IP address. Next, we create a virtual server that is enabled on the new tunnel. The new tunnel gets the traffic from GWLB, decapsulates it, and it then gets processed by the inspection virtual server. After the virtual server processes the packet and applies the defined security profiles, it needs to forward the packet back to the tunnel. The way to 'force' traffic back into the tunnel is to create a fake node that 'lives' inside the tunnel and use it as the target pool of the virtual server. Once the tunnel interface receives the packets from the virtual server, it encapsulates them with GENEVE and forwards them back to GWLB. The following diagram describes the traffic flow within a specific BIG-IP device:

Considerations and known issues:
Traffic flows inside the 'inspection zone' must remain unchanged, i.e., no change in the packet's 5-tuple is allowed.
All virtual servers are supported as long as the 5-tuple of the packet is maintained. That might require changing some configurations, for example:
Disable the default 'address translation' feature of a 'standard' virtual server.
Set the 'source port' setting to 'Preserve strict'.
Payload manipulation is supported as long as the packet's 5-tuple is maintained. Some examples: HTTP rewrites; AWAF blocking responses – AWAF responds with a 'blocking page'.
High Availability
When a BIG-IP instance fails in the target group, GWLB continues to send existing flows to the failed instance. Only new connections will be directed to a new instance.
Deploy BIG-IP instances across availability zones according to the desired AZ availability requirements. The default behavior for GWLB is to keep traffic in the same AZ, meaning traffic that was received on an endpoint in AZ1 will only be forwarded to instances in AZ1.
That behavior is controlled by the 'CrossZoneLoadBalancing' flag: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-crosszone-lb.html
If you do change the 'CrossZoneLoadBalancing' flag, keep in mind that GWLB will distribute traffic across AZs even when all instances are healthy and will incur cross-AZ traffic costs (see the sketch at the end of this article).
Flow timeout
GWLB has its own protocol timeouts (350 seconds for TCP, 120 for UDP). The timeout values on the BIG-IP must be smaller than the ones on GWLB; BIG-IP uses 300 seconds for TCP and 60 for UDP by default.
With the GWLB integration you can now create an elastic group of BIG-IPs that is exposed to the rest of the AWS network using AWS native networking constructs. GWLB is responsible for distributing the traffic evenly across the group while maintaining flow affinity, so that each flow will always be routed to the same BIG-IP in the group, thus solving traffic symmetry without requiring any SNAT. The security inspection zone can be deployed in its own VPC, allowing better administrative separation. Exposing the inspection service is done using VPC endpoints; those endpoints are tied to the GWLB and can be used as the target of different routing tables (TGW, ingress, subnets).
Summary
The BIG-IP and GWLB integration enables the industry-leading BIG-IP security services with the following benefits:
Elastic scalability: Scale your deployment based on your actual usage. Horizontally scale your BIG-IPs.
Simplified connectivity: Leverage AWS native networking constructs to 'insert' BIG-IP security services in different traffic flows (North-South, East-West).
Cost effectiveness: Consolidate BIG-IP deployments and size the deployment based on actual usage.
Next steps:
Check out the next articles in this series -
Ingress/Egress VPC inspection with BIG-IP and GWLB deployment pattern
Ingress/Egress and inter-VPC inspection with BIG-IP and GWLB deployment pattern
Test the integration yourself - check out our self-service lab that you can deploy in your own AWS account (fully automated deployment using Terraform):
https://github.com/f5devcentral/f5-digital-customer-engagement-center/tree/main/solutions/security/ingress-egress-fw
https://github.com/f5devcentral/f5-digital-customer-engagement-center/tree/main/solutions/security/ingress-egress-inter-vpc-fw-gwlb
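To make the cross-AZ discussion above concrete, here is a small, hedged boto3 (Python) sketch of toggling cross-zone load balancing on a GWLB and listing the health of the BIG-IP targets. The ARNs are placeholders, and this is not part of the lab code linked above.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

GWLB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/gwy/example/abc123"   # placeholder
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/example/def456" # placeholder

# Enable (or disable) cross-zone load balancing on the GWLB.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=GWLB_ARN,
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)

# List the health state of each BIG-IP instance in the GWLB target group.
health = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
for desc in health["TargetHealthDescriptions"]:
    print(desc["Target"]["Id"], desc["TargetHealth"]["State"])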
Deploy BIG-IP in AWS with HA across AZ’s - without using EIP’s
Background:
The CloudFormation templates that are provided and supported by F5 are an excellent resource for customers to deploy BIG-IP VE in AWS. Along with these templates, the documentation guiding your F5 deployment in AWS is an excellent resource. And of course, DevCentral articles are helpful. I recommend reading about HA topologies in AWS to start. I hope my article today can shed more light on an architecture that will suit a specific set of requirements: no Elastic IPs (EIPs), high availability (HA) across AZs, and multiple VPCs.
Requirements behind this architecture choice:
I recently had a requirement to deploy BIG-IP appliances in AWS across AZs. I had read the official deployment guide, but I wasn't clear on how to achieve failover without EIPs. I was given 3 requirements:
HA across AZs. In this architecture, we required a pair of BIG-IP devices in Active/Standby, where each device was in a different AZ. I needed to be able to fail over between devices.
No EIPs. This requirement existed because a 3rd-party firewall was already exposed to the Internet with a public IP address. That firewall would forward inbound requests to the BIG-IP VE in AWS, which in turn proxied traffic to a pair of web servers. Therefore, there was no reason to associate an EIP (a public IP address) with the BIG-IP interface. In my demo below I have not exposed a public website through a 3rd-party firewall, but to do so is a simple addition to this demo environment.
Multiple VPCs. This architecture had to allow for multiple VPCs in AWS. There was already a "Security VPC" which contained firewalls, BIG-IP devices, and other devices, and I had to be able to use these devices for protecting applications that were spread across 1 or more disparate VPCs.
Meeting the requirements:
HA across AZs
This is the easiest of the requirements to meet because F5 has already provided templates to deploy these in AWS. I personally used a 2-NIC device, with a BYOL license, deployed to an existing VPC, so that meant my template was this one. After this deployment is complete, you should have a pair of devices that will sync their configuration.
At time of failover
The supported F5 templates will deploy with the Advanced HA iApp. It is important that you configure this iApp after you have completed your AWS deployments. The iApp uses IAM permissions deployed with the template to make API calls to AWS at the time of failover. The API calls will update the route tables that you specify within the iApp wizard. Because this iApp is installed on both devices, either device can update the route in your route tables to point to its own interface.
Update as of Dec 2019
This article was first written in Feb 2019, and in Dec 2019 F5 released the Cloud Failover Extension (CFE), which is a cloud-agnostic, declarative way to configure failover in multiple public clouds. You can use the CFE instead of the Advanced HA iApp to achieve high availability between BIG-IP devices in cloud.
Update as of Apr 2020
Your API calls will typically be sent to the public Internet endpoints for AWS EC2 API calls. Optionally, you can use AWS VPC endpoints to keep your API calls out to AWS EC2 from traversing the public Internet. My colleague Arnulfo Hernandez has written an article explaining how to do this.
No EIPs
Configure an "alien IP range"
I'm recycling another DevCentral solution here. You will need to choose an IP range for your VIP network that does not fall within the CIDR that is assigned to your VPC.
Let's call it an "alien range" because it "doesn't belong" in your VPC and you couldn't assign IP addresses from this range to your AWS ENIs. Despite that, now create a route table within AWS that points this "alien range" to your active BIG-IP device's ENI (if you're using a 2+ NIC device, point it to the correct data-plane NIC, not the management interface). Don't forget to associate the route table with specific subnets, per your design. Alternatively, you could add this route to the default VPC route table. (A hedged boto3 sketch of these AWS-side steps appears at the end of this article.)
Create a VIP on your active device
Now create a VIP on your active device and configure the IP address as an IP within your alien range. Ensure the config replicates to your standby device. Ensure that source/destination checking is disabled on the ENIs that your AWS routes are pointing to (on both standby and active devices). You should now have a VIP that you can target from other hosts in your VPC, provided that the route you created above is applied to the traffic destined to the VIP.
Multiple VPCs
For extra credit, we'll set up a Transit Gateway. This will allow other VPCs to route traffic to this "alien range" also, provided that the appropriate routes exist in the remote VPCs and are applied to the appropriate Transit Gateway route table. Again, I'm recycling ideas that have already been laid out in other DevCentral posts. I won't re-hash how to set up a transit gateway in AWS, because you can follow the linked post above. Suffice it to say, this is what you will need to set up if you want to route between multiple VPCs using a transit gateway:
2 or more VPCs
A transit gateway in AWS
Multiple transit gateway attachments that attach the transit gateway and each VPC you would like to route between. You will need one attachment per VPC.
A transit gateway route table that is associated with each attachment.
I will point out that you need to add a route for your "alien range" in your transit gateway route table, and in your remote VPCs. That way, hosts in your remote VPCs will forward traffic destined to your alien range (VIP network) to the transit gateway, the transit gateway will forward it to your VPC, and the route you created in step A will forward that traffic to your active BIG-IP device.
Completed design:
After the above configuration, you should have an environment that looks like the diagram below:
Tips
Internet access for deployments: When you deploy your BIG-IP devices, they will need Internet access to pull down some resources, including the iApp. So if you are deploying devices into your existing VPC, make sure you have a reachable Internet Gateway in AWS so that the devices have Internet access through both their management interface and their data-plane interface(s).
Internet access for failover: Remember that an API call to AWS will still use an outbound request to the Internet. Make sure you allow the BIG-IP devices to make outbound connections over HTTPS to the Internet. If this is not available, you will find that your route tables are not updated at time of failover. (If you have a hard requirement that your devices should not have outbound access to the Internet, you can follow Arnulfo's guide linked above and use VPC endpoints to keep this traffic on your local VPC.)
iApp logs: You can enable these in the iApp settings. I have used the logs (in /var/log/ltm) to troubleshoot issues that I have created myself. That's how I learned not to accidentally cut off Internet access for your devices!
Don't forget about return routes if SNAT is disabled: Just like in on-prem environments, if you disable SNAT, the pool member will need to route return traffic back to the BIG-IP device. You will commonly set up a default route (0.0.0.0/0) in AWS, point it at the ENI of the active BIG-IP device, and associate this route table with the subnet containing the pool members. If the pool members are in a remote VPC, you will need to create this route on the transit gateway route table also.
Don't accidentally cut off Internet access: When you configure the default route of 0.0.0.0/0 to point to eth1 of the BIG-IP device, don't apply this route to every subnet in your Security VPC. It may be easy to do so accidentally, but remember that it could cause the API calls that update route tables to fail when the standby device becomes active.
Don't forget to disable source/dest check on your ENIs. This is configured by the template, but if you have other devices that require it, remember to check this setting.
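As referenced above, here is a hedged boto3 (Python) sketch of the AWS-side steps: creating the alien-range route toward the active BIG-IP's data-plane ENI and disabling source/destination checking. All IDs and the CIDR are placeholders, not values from this article.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ROUTE_TABLE_ID = "rtb-0123456789abcdef0"   # placeholder route table
BIGIP_DATA_ENI = "eni-0123456789abcdef0"   # placeholder: active BIG-IP data-plane ENI
ALIEN_RANGE = "192.168.100.0/24"           # placeholder: VIP range outside the VPC CIDR

# Point the alien range at the active BIG-IP's data-plane ENI.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock=ALIEN_RANGE,
    NetworkInterfaceId=BIGIP_DATA_ENI,
)

# Disable source/destination checking so the ENI can forward traffic that it
# did not originate and that is not addressed to it.
ec2.modify_network_interface_attribute(
    NetworkInterfaceId=BIGIP_DATA_ENI,
    SourceDestCheck={"Value": False},
)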
Easy demo: Protect container traffic to EKS with BIG-IP
Intro
EKS (Elastic Kubernetes Service) is popular! It allows you to deploy apps without managing the Kubernetes management plane yourself. If you're planning to run an app using EKS, you will likely need to consider ingress security along with that app. Securing traffic into Kubernetes is a topic we love to talk about at F5.
Why would you want a demo securing traffic into EKS? It's fully automated! Since most people are new to Kubernetes (k8s for short), I find myself in need of a quick demo that's totally open for anyone to run. This way, after I demo, the audience can go back and run it on their own, decompose it for themselves, and take their own time to understand it all. Here's what we will automatically deploy:
Prerequisites
In short, you'll need an AWS demo account, a workstation with Terraform installed, your AWS creds configured, and the aws-iam-authenticator installed. Here's a VM with all of your prerequisites: if you don't have a workstation with Terraform installed, don't worry! You can easily set up the client workstation you need by deploying this demo workstation, which will have all the required tools. Then, configure your AWS IAM credentials.
Side note: When building a cloud solution example, I try to stay as "vendor neutral" as practical and avoid using 3rd-party tools when possible. For example, I have written a demo using AWS Lambda functions, instead of a 3rd-party automation tool, for post-deployment configuration. This might be preferred if you do not (or cannot) use a particular 3rd-party tool. So I originally started down the path of following this article to completely build an EKS demo via CFT. I planned to extend the quickstart demo to include an ingress controller and security policy, along with an app to protect. I guess I planned to use Lambda functions again. However, after talking about this with colleagues, the workarounds came to seem silly: nobody would do this in real life, so constraining myself to only using a CFT seemed unnecessary. Instead I chose to use the ubiquitous tool Terraform. This is easier to use and also saves me from requiring a user to have a few other tools, like kubectl, helm, aws-iam-authenticator, etc. So, for better or worse, Terraform is a prerequisite here. The workstation I've used is Ubuntu 18.04.
Now, for the demo
To deploy:
You'll need Terraform set up and your AWS credentials configured. Optionally, deploy a demo workstation by following the ReadMe guide here.
Configure your AWS credentials in a demo account. Remember, this demo will deploy AWS resources that will cost you money via your AWS bill. Don't forget to destroy all resources at the end of your demo. If you choose to skip this step, proceed with your own client workstation. But if you hit failures, I recommend using this demo workstation, since it is what I used when building this demo.
Connect to your workstation and run these commands. When prompted for confirmation, type "yes" as instructed.
git clone https://github.com/mikeoleary/aws-eks-bigip-terraform
cd aws-eks-bigip-terraform/vpc
terraform init
terraform plan
terraform apply
You've deployed! You will need to wait around 20 minutes for everything to deploy: a VPC, EKS, a BIG-IP, and then an application in EKS to be protected by BIG-IP.
To see your finished product:
Once your Terraform commands have completed, you should see output values. If not, type this and take note of the output values:
terraform output
Now, open a web browser and visit http://<public_ip_app>.
If you want to inspect the configuration of BIG-IP, visit https://<public_dns> and log in with username admin and password <password>. If you want to see the pods, service, or configmap running in EKS, run:
mkdir ~/.kube
terraform output kubeconfig > ~/.kube/config
kubectl describe po
kubectl describe svc
kubectl describe configmap
Don't stop here! Remember, you'll need to follow the steps to destroy your AWS resources, otherwise you'll be charged for running these AWS resources. Coincidentally, you can use Terraform to limit this kind of rogue AWS spend, so don't be this guy:
Delete this demo
To destroy this demo in full, type:
terraform destroy
What just happened?
Let's review what just happened. First, Terraform deployed a VPC and subnets, using the AWS provider. Then, Terraform deployed some more resources in AWS:
an EKS cluster
an EC2 instance to be the k8s worker node
a BIG-IP VM
Then, Terraform deployed some resources to EKS using the Kubernetes provider:
a k8s deployment using the f5devcentral/helloworld image
a k8s service to expose this deployment
a k8s secret, a service account, and a cluster role binding, all to be used later in the helm deployment
a k8s ConfigMap resource, which is read by CIS and populates a pool in a partition with the IP addresses of pods in a given service
Then, Terraform deployed F5 Container Ingress Services to k8s using the Helm provider:
a Helm chart that deploys F5 CIS using Helm
the Helm deployment was configured so that F5 CIS will point at the BIG-IP management address and use the secret generated earlier
After all this, the components are all in place. We have an app in k8s that is exposed to ingress traffic, which is routed via the BIG-IP and automatically updated. Our app is protected!
Call to Action
To summarize, please use this demo to teach yourself and show your colleagues:
how critical it is to secure traffic into your container environment
how you can automate this process by using the tools you wish, and F5's extensive APIs for automation
Please leave me a note with feedback if you are able to use this demo. I would like it to be useful for you. Let me know how I can make it more useful, or feel free to contribute to the repo yourself. Thanks!
Use AWS Lambda functions in a CFT
Problem statement: You have an existing VPC and you want to deploy BIG-IP, and you want to add a default route to steer traffic to the ENI of the BIG-IP after it's created. We can easily use F5's supported templates to deploy BIG-IP in AWS with a CloudFormation Template (CFT) into this VPC, but what if you want to add some automation after the BIG-IP is deployed? For example, what if you wanted to add a default route to steer traffic to the ENI of the BIG-IP that was just created? You might think of these options:
You could create the RouteTable(s) as part of your CFT and associate them with the subnet(s) you deploy your BIG-IP into. But what if the route already exists in your VPC? You can't edit a route via a CFT - only create one.
You could use a tool to automate your deployment, like Ansible or Jenkins. This is a fantastic demonstration of these tools. You could deploy BIG-IP, then use a CFT output to get the ENI id of the BIG-IP interface, get the route table id, and make your update via the replace-route API call. But you might be required to make your entire deployment in 1 CFT, or maybe you're looking to ensure your CFT doesn't rely on other tasks after deployment.
So, what if you cannot use a 3rd-party tool? You could write your automation into a Lambda function, then create and execute that Lambda function as part of your CFT. This gives you the ability to run tasks in AWS as part of your CFT deployment.
This article explains option #3. Modify the automation for your other use cases.
Solution: The steps to follow when you need Lambda functionality as part of your CFT are:
Create an IAM role in your CFT for your Lambda function to execute.
Create a Lambda function.
Create a Custom Resource that is backed by the Lambda function. This will execute your Lambda function with parameters you can specify, and report a Success or Failure back to CloudFormation.
I'll explain how to do each of these and provide an example CFT at the end.
Create an IAM role
Include code such as below to create an IAM role that is appropriate for your Lambda function. Do not give this role more permission than required. Very public breaches have occurred as a result of misuse of IAM roles, so by keeping your IAM roles limited in scope, you decrease your risk if they are misused. (In fact, I wrote some best practices to decrease this risk.) In our case, we're allowing the Lambda function permissions over the EC2 service.
Create a Lambda function
Here's where you'll need to define the data you'll pass to your Lambda function (if any), what you want Lambda to do, and what it should return (if anything). Then write your function in a supported language. I've chosen Python. In this case, we're updating a route in AWS, so we're passing 3 things to the function: a CIDR range, an ENI id, and a RouteTable id. Upon creation, this Lambda function will update the specified route (CIDR) in the specified RouteTable to point at the specified ENI. The Lambda function will also reference the IAM role we are creating.
Create a Custom Resource
Custom Resources can be used to represent things we cannot define with normal AWS CFT resource types, the way we can define EC2 instances and VPCs. A Lambda-backed Custom Resource is just one example of this. Our Custom Resource will pass an Event Type to the Lambda function (either Create, Update, or Delete), and then we'll pass Resource Properties that will provide the function with the CIDR, ENI id, and RouteTable id required.
In our case, the Lambda function returns the output of the API call upon Create, but returns nothing upon Delete. Still with me? String these together, and you have a Lambda function that is executed as part of your CFT. CloudFormation is smart enough to know that these depend upon values from other resources in the template, so they will be created after the EC2 instance is created. You could use a DependsOn statement to ensure this behavior if you had other reasons to wait for something before Lambda execution.
Demo
Attached to this article is a ZIP file containing 2 CFT templates.
First, deploy CFT #1, which is a file called "vpc.json". This will create a VPC that pre-exists your BIG-IP deployment. It is extremely simple: a single VPC, with a single subnet, which has an associated route table with a default route of 0.0.0.0/0 pointing to an Internet Gateway. Take note of the Outputs. There will be a RouteTableId and a SubnetId. You will need these IDs for the next step. Notice: the route table you created has a default route pointing toward an Internet Gateway.
Then, deploy CFT #2, which is a file called "route-update.json". You will need to enter 2 parameters: the RouteTableId and SubnetId that were outputs from step #1. It will deploy an ENI (network interface) into the subnet, and then update the route table that you specified in your input parameters to point 0.0.0.0/0 at this ENI. Notice again: the route table's default route has been updated to point at the ENI, not the Internet Gateway.
Obviously you'll want to modify these to point at the ENI of a BIG-IP device, but I've kept these templates very simple to show you just the updating of a route via Lambda. You may even want to modify the code of the Lambda function to do other things - go for it! Enjoy.
Use this article as a starting point, then think of cool use cases for Lambda functions. Prove them out and leave a comment to share your experience. And don't forget: your own role within the AWS account matters. You cannot create an IAM role that has permissions greater than you have yourself. (And if you're deploying this with a service account using another IAM role, consider that.)
Credit goes to this article, which I used to learn about Lambda-backed Custom Resources and then modified for this use case.
Demo Cloud Formation Template Files
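To illustrate the kind of Lambda handler described above, here is a heavily simplified, hedged Python sketch. It is not the code from the attached templates; it assumes the function is defined inline in the CFT (ZipFile), where the cfnresponse helper module is available, and a production handler needs fuller error handling.

# Hypothetical, simplified sketch of a Lambda-backed custom resource handler
# that updates a route, in the spirit of the example described above.
import boto3
import cfnresponse   # available when the function code is inline (ZipFile) in the CFT

ec2 = boto3.client("ec2")

def handler(event, context):
    props = event["ResourceProperties"]   # CIDR, ENI id, and RouteTable id passed from the CFT
    data = {}
    try:
        if event["RequestType"] in ("Create", "Update"):
            # Replace the existing route so it points at the BIG-IP's ENI.
            resp = ec2.replace_route(
                RouteTableId=props["RouteTableId"],
                DestinationCidrBlock=props["DestinationCidrBlock"],
                NetworkInterfaceId=props["NetworkInterfaceId"],
            )
            data["Result"] = str(resp)
        # On Delete we do nothing, mirroring the behavior described above.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, data)
    except Exception as err:
        # Always report back, or the stack operation will hang until timeout.
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(err)})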
Getting the Most out of Amazon EC2 Autoscale Groups with F5 BIG-IP
An Introduction to Amazon EC2 Auto Scaling
One of the core tenets of cloud computing is elasticity, or the ability to scale cloud resources as needed to meet organizational or user needs. Amazon EC2 Auto Scaling is a feature within AWS focused on facilitating the provisioning and deprovisioning of elastic cloud computing resources to meet user demand. This article will cover how F5 BIG-IP can embrace elasticity and leverage the various features of Auto Scaling to manage steps of the F5 BIG-IP lifecycle, such as provisioning, licensing, configuration management, and upgrades.
The Architecture
In a typical F5 BIG-IP architecture leveraging Amazon EC2 Auto Scaling, there are two key components. The first key component is the Auto Scaling group itself, which contains one to many F5 BIG-IP VE instances as part of a managed group. As the name suggests, the power of an Auto Scaling group is its ability to introduce and remove F5 BIG-IP VE instances in the group based on a specified number of required instances or a configured monitoring threshold reflecting user demand. The second key component is the AWS NLB, or Network Load Balancer. While limited in its features compared to an F5 BIG-IP, the NLB plays a critical role in distributing traffic evenly across the F5 BIG-IP VEs within the Auto Scaling group. The NLB also tracks F5 BIG-IP instances added to and removed from the Auto Scaling group and load balances only to active instances within the group. Brought together, an Auto Scaling group containing F5 BIG-IPs and an NLB brings the benefits of elasticity and scaling of F5 BIG-IP functionality by provisioning and deprovisioning BIG-IP instances to match organizational capacity needs. Additionally, this architecture has the potential benefit of simplifying maintenance of F5 BIG-IPs by enabling engineers to remove instances from service to perform tasks such as upgrades. While this architecture brings many new benefits to scaling and maintenance, it also introduces new, unique challenges:
How do you ensure a consistent configuration is applied when a new F5 BIG-IP is provisioned?
If you are using BYOL licensing with BIG-IQ, how do you ensure licenses are applied during provisioning and revoked during termination?
How do you ensure the F5 BIG-IP you just provisioned is tested and fully functioning before going into service?
These are all challenges where Lifecycle Hooks become our secret weapon.
Meet Lifecycle Hooks
A powerful feature contained within Amazon EC2 Auto Scaling is Lifecycle Hooks. Lifecycle Hooks are event-driven triggers executed as instances are added to or removed from an Auto Scaling group. Lifecycle Hooks enable the ability to put EC2 instances in a wait state at various steps of the instance's lifecycle and execute external actions such as an AWS Lambda function. The power of Lifecycle Hooks in the context of F5 BIG-IP is that they enable the ability to execute external AWS Lambda-driven code to perform management tasks, such as applying a list of AS3 declarations from an S3 bucket at the time of provisioning or revoking a BYOL license at the time of termination. This feature simplifies the management of F5 BIG-IP Auto Scaling groups by ensuring any newly provisioned F5 BIG-IPs have the needed configurations to match other instances in the group. Additionally, this feature provides the benefit of immediately revoking F5 BIG-IP BYOL licenses when an instance is being deprovisioned, ensuring an organization is maximizing its F5 spend.
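As a rough, hedged illustration of the provisioning-time task described above (hypothetical bucket, credentials, and addressing; not code from the referenced example), a lifecycle-hook-triggered Lambda might look something like this Python sketch:

# Hypothetical sketch: a Lambda triggered by an EC2 Auto Scaling launch
# lifecycle hook (via EventBridge) that pulls an AS3 declaration from S3,
# pushes it to the new BIG-IP, and then releases the instance from its wait state.
import json
import boto3
import requests   # assumes the requests library is packaged with the function

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

def handler(event, context):
    detail = event["detail"]                  # EventBridge lifecycle action event
    instance_id = detail["EC2InstanceId"]

    # Assumption: the first private IP on the instance is the management address.
    reservation = ec2.describe_instances(InstanceIds=[instance_id])
    bigip_mgmt = reservation["Reservations"][0]["Instances"][0]["PrivateIpAddress"]

    # Pull the AS3 declaration from S3 (placeholder bucket/key) and apply it.
    declaration = json.loads(
        s3.get_object(Bucket="my-as3-bucket", Key="as3.json")["Body"].read()
    )
    resp = requests.post(
        f"https://{bigip_mgmt}/mgmt/shared/appsvcs/declare",
        json=declaration,
        auth=("admin", "REPLACE_ME"),         # placeholder credentials
        verify=False,                         # demo only
    )
    result = "CONTINUE" if resp.ok else "ABANDON"

    # Tell Auto Scaling whether the instance is ready for service.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        InstanceId=instance_id,
        LifecycleActionResult=result,
    )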
In addition to F5 BIG-IP lifecycle management tasks, Lifecycle Hooks can perform initial testing of F5 BIG-IP instances before they are placed inside the Auto Scaling group to accept new user traffic. These tests can include use cases such as ensuring a group of VIPs is correctly processing traffic or a WAF policy is blocking a known attack. If the F5 BIG-IP fails the test, the Lifecycle Hook can terminate the failing instance and spin up a new one until the test criteria are met. This workflow automatically reduces the risk of a failing F5 BIG-IP instance receiving user traffic and ensures a standard level of quality control for F5 BIG-IP instances entering the Auto Scaling group. The power of Lifecycle Hooks combined with the flexibility of AWS Lambda provides a nearly endless number of possibilities as part of the F5 BIG-IP provisioning and deprovisioning lifecycle. Lifecycle Hooks empower cloud engineers and F5 administrators to programmatically trigger event-driven code that performs the repetitive management and testing tasks common to scaling F5 BIG-IP deployments. An example of an AWS Lambda function used to perform the lifecycle tasks mentioned above can be found here.
Rolling Updates with Instance Refresh
In addition to Lifecycle Hooks, Amazon EC2 Auto Scaling provides a feature to simplify upgrading F5 BIG-IPs called Instance Refresh. Instance Refresh enables the ability to incrementally replace one version of an F5 BIG-IP machine image with another version of the image in the form of a rolling deployment. Integrated with Lifecycle Hooks, Instance Refresh can upgrade, configure, and replace instances running an older version of BIG-IP with a new version. Additionally, Instance Refresh by default integrates with NLB to gradually drain connections from the old F5 BIG-IP images before removal, making the service impact of performing an upgrade little to nonexistent. The combined benefit of Instance Refresh with Lifecycle Hooks is an automated upgrade process with the potential for minimal user impact.
Speedier Scaling with Warm Pools
The final helpful feature of Amazon EC2 Auto Scaling when managing F5 BIG-IPs is warm pools. A warm pool is a newly added feature within Auto Scaling that allows for the creation of pre-initialized F5 BIG-IPs that live in a stopped state until additional capacity is needed. The benefit of using a warm pool in an F5 BIG-IP Auto Scaling group is that it allows many of the time-consuming tasks performed when initializing a virtual appliance to happen ahead of when the BIG-IP is needed. These tasks include licensing, module provisioning, and other onboarding tasks. When tested, the use of a warm pool on average nearly halved the time needed to add a new BIG-IP into service. The cost of maintaining a warm pool is also relatively small compared to paying for an overprovisioned Auto Scaling group, as AWS does not charge for compute for stopped instances and only charges for storage. When examining the architectural considerations for auto scaling F5 BIG-IP instances, warm pools enable the ability to quickly add new F5 BIG-IP instances in scenarios where user demand may be unpredictable.
Conclusion and Next Steps
In this article, we have covered the many ways Amazon EC2 Auto Scaling can be leveraged to improve the management of F5 BIG-IP instances inside AWS.
For further information and a full Terraform example of how the concepts detailed in this article can be implemented, check out this GitHub repository: https://github.com/tylerhatton/f5-warm-pool-demo
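For readers who want to poke at the warm pool and Instance Refresh behavior described above outside of Terraform, here is a small, hedged boto3 (Python) sketch; the group name and sizes are placeholders.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
ASG_NAME = "bigip-asg"   # placeholder Auto Scaling group name

# Keep a couple of pre-initialized (stopped) BIG-IP instances ready to join the group.
autoscaling.put_warm_pool(
    AutoScalingGroupName=ASG_NAME,
    MinSize=2,
    PoolState="Stopped",
)

# Kick off a rolling replacement of instances (for example, to roll out a new
# BIG-IP image), keeping at least half of the group healthy during the refresh.
refresh = autoscaling.start_instance_refresh(
    AutoScalingGroupName=ASG_NAME,
    Strategy="Rolling",
    Preferences={"MinHealthyPercentage": 50, "InstanceWarmup": 300},
)
print(refresh["InstanceRefreshId"])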
F5 High Availability - Public Cloud Guidance
This article will provide information about BIG-IP and NGINX high availability (HA) topics that should be considered when leveraging the public cloud. There are differences between on-prem and public cloud, such as cloud provider L2 networking. These differences lead to challenges in how you address HA, failover time, peer setup, scaling options, and application state.
Topics Covered:
Discuss and Define HA
Importance of Application Behavior and Traffic Sizing
HA Capabilities of BIG-IP and NGINX
Various HA Deployment Options (Active/Active, Active/Standby, auto scale)
Example Customer Scenario
What is High Availability?
High availability can mean many things to different people. Depending on the application and traffic requirements, HA requires dual data paths, redundant storage, redundant power, and compute. It means the ability to survive a failure, maintenance windows should be seamless to the user, and the user experience should never suffer...ever!
Reference: https://en.wikipedia.org/wiki/High_availability
So what should HA provide?
Synchronization of configuration data to peers (ex. config objects)
Synchronization of application session state (ex. persistence records)
Enable traffic to fail over to a peer
Locally, allow clusters of devices to act and appear as one unit
Globally, disburse traffic via DNS and routing
Importance of Application Behavior and Traffic Sizing
Let's look at a common use case... "gaming app, lots of persistent connections, client needs to hit same backend throughout entire game session"
Session State
The requirement for session state is common across applications, using methods like HTTP cookies, F5 iRule persistence, JSessionID, IP affinity, or hash. The session type used by the application can help you decide what migration path is right for you. Is this an app more fitting for a lift-and-shift approach...Rehost? Can the app be redesigned to take advantage of all native IaaS and PaaS technologies...Refactor?
Reference: 6 R's of a Cloud Migration
Application session state allows the user to have a consistent and reliable experience
Auto scaling L7 proxies (BIG-IP or NGINX) keep track of session state
BIG-IP can only mirror session state to the next device in the cluster
NGINX can mirror state to all devices in the cluster (via zone sync)
Traffic Sizing
The cloud provider does a great job with things like scaling, but there are still cloud provider limits that affect sizing and machine instance types to keep in mind. BIG-IP and NGINX are considered network virtual appliances (NVA). They carry quota limits like other cloud objects.
Google GCP VPC Resource Limits
Azure VM Flow Limits
AWS Instance Types
Unfortunately, not all limits are documented. Key metrics for L7 proxies are typically SSL stats, throughput, connection type, and connection count. Collecting these application and traffic metrics can help identify the correct instance type. We have a list of the F5 supported BIG-IP VE platforms on F5 CloudDocs.
F5 Products and HA Capabilities
BIG-IP HA Capabilities
BIG-IP supports the following HA cluster configurations:
Active/Active - all devices processing traffic
Active/Standby - one device processes traffic, others wait in standby
Configuration sync to all devices in cluster
L3/L4 connection sharing to next device in cluster (ex. avoids re-login)
L5-L7 state sharing to next device in cluster (ex.
IP persistence, SSL persistence, iRule UIE persistence)
Reference: BIG-IP High Availability Docs
NGINX HA Capabilities
NGINX supports the following HA cluster configurations:
Active/Active - all devices processing traffic
Active/Standby - one device processes traffic, others wait in standby
Configuration sync to all devices in cluster
Mirroring connections at L3/L4 not available
Mirroring session state to ALL devices in cluster using the Zone Synchronization Module (NGINX Plus R15)
Reference: NGINX High Availability Docs
HA Methods for BIG-IP
In the following sections, I will illustrate 3 common deployment configurations for BIG-IP in public cloud.
HA for BIG-IP Design #1 - Active/Standby via API
HA for BIG-IP Design #2 - A/A or A/S via LB
HA for BIG-IP Design #3 - Regional Failover (multi region)
HA for BIG-IP Design #1 - Active/Standby via API (multi AZ)
This failover method uses API calls to communicate with the cloud provider and move objects (IP addresses, routes, etc.) during failover events. The F5 Cloud Failover Extension (CFE) for BIG-IP is used to declaratively configure the HA settings.
Cloud provider load balancer is NOT required
Failover time can be SLOW!
Only one device actively used (other device sits idle)
Failover uses API calls to move cloud objects, times vary (see CFE Performance and Sizing)
Key Findings:
Google API failover times depend on the number of forwarding rules
Azure API slow to disassociate/associate IPs to NICs (remapping)
Azure API fast when updating routes (UDR, user defined routes)
AWS reliable with API regarding IP moves and routes
Recommendations:
This design with multi AZ is preferred over single AZ
Recommend when "traditional" HA cluster required or lift-and-shift...Rehost
For Azure (based on my testing)...
Recommend using Azure UDR versus IP failover when possible
Look at the "failover via LB" example instead for Azure
If API method required, look at DNS solutions to provide further redundancy
HA for BIG-IP Design #2 - A/A or A/S via LB (multi AZ)
Cloud LB health checks the BIG-IP for up/down status
Faster failover times (depends on cloud LB health timers)
Cloud LB allows A/A or A/S
Key difference:
Increased network/compute redundancy
Cloud load balancer required
Recommendations:
Use "failover via LB" if you require faster failover times
For Google (based on my testing)...
Recommend against "via LB" for IPsec traffic (Google LB not supported)
If load balancing IPsec, then use "via API" or "via DNS" failover methods
HA for BIG-IP Design #3 - Regional Failover via DNS (multi AZ, multi region)
BIG-IP VE active/active in multiple regions
Traffic disbursed to VEs by DNS/GSLB
DNS/GSLB intelligent health checks for the VEs (see the Route 53 sketch at the end of this article)
Key difference:
Cloud LB is not required
DNS logic required by clients
Orchestration required to manage configs across each BIG-IP
BIG-IP standalone devices (no DSC cluster limitations)
Recommendations:
Good for apps that handle DNS resolution well upon failover events
Recommend when cloud LB cannot handle a particular protocol
Recommend when customer is already using DNS to direct traffic
Recommend for applications that have been refactored to handle session state outside of BIG-IP
Recommend for customers with in-house skillset to orchestrate (Ansible, Terraform, etc.)
HA Methods for NGINX
In the following sections, I will illustrate 2 common deployment configurations for NGINX in public cloud.
HA for NGINX Design #1 - Active/Standby via API
HA for NGINX Design #2 - Auto Scale Active/Active via LB
HA for NGINX Design #1 - Active/Standby via API (multi AZ)
NGINX Plus required
Cloud provider load balancer is NOT required
Only one device actively used (other device sits idle)
Only available in AWS currently
Recommendations:
Recommend when "traditional" HA cluster required or lift-and-shift...Rehost
Reference: Active-Passive HA for NGINX Plus on AWS
HA for NGINX Design #2 - Auto Scale Active/Active via LB (multi AZ)
NGINX Plus required
Cloud LB health checks the NGINX
Faster failover times
Key difference:
Increased network/compute redundancy
Cloud load balancer required
Recommendations:
Recommended for apps fitting a migration type of Replatform or Refactor
Reference: Active-Active HA for NGINX Plus on AWS, Active-Active HA for NGINX Plus on Google
Pros & Cons: Public Cloud Scaling Options
Review this handy table to understand the high-level pros and cons of each deployment method.
Example Customer Scenario #1
As a means to make this topic a little more real, here is a common customer scenario that shows you the decisions that go into moving an application to the public cloud. Sometimes it's as easy as a lift-and-shift; other times you might need to do a little more work. In general, public cloud is not on-prem and things might need some tweaking. Hopefully this example will give you some pointers and guidance on your next app migration to the cloud.
Current Setup:
Gaming applications
F5 hardware BIG-IP VIPRIONs on-prem
Two data centers for HA redundancy
iRule-heavy configuration (TLS encryption/decryption, payload inspections)
Session persistence = iRule Universal Persistence (UIE), and other methods
Biggest app:
15K SSL TPS
15 Gbps throughput
2 million concurrent connections
300K HTTP req/sec (L7 with TLS)
Requirements for Successful Cloud Migration:
Support current traffic numbers
Support future target traffic growth
Must run in multiple geographic regions
Maintain session state
Must retain all iRules in use
Recommended Design for Cloud Phase #1:
Migration type: hybrid model, on-prem + cloud, and some Rehost
Platform: BIG-IP (retaining iRules means BIG-IP is required)
Licensing: High Performance BIG-IP (unlocks additional CPU cores past 8, up to 24, for extra traffic and SSL processing)
Instance type: check F5 supported BIG-IP VE platforms for accelerated networking (10Gb+)
HA method: Active/Standby and multi-region with DNS
iRule Universal persistence only mirrors to the next device, so keep cluster size to 2
Scale horizontally via additional HA clusters and DNS
Clients pinned to a region via DNS (on-prem or public cloud)
Inside a region, the local proxy cluster shares state
This example comes up in customer conversations often. Based on customer requirements, in-house skillset, current operational model, and time frames, there is one option that is better than the rest. A second design phase lends itself to more of a Replatform or Refactor migration type. In that case, more options can be leveraged to take advantage of cloud-native features. For example, changing the application persistence type from iRule UIE to cookie would allow BIG-IP to avoid keeping track of state. Why? With cookies, the client keeps track of that session state: the client receives a cookie, passes the cookie to the L7 proxy on successive requests, and the proxy checks the cookie value and sends the request to a backend pool member. The requirement for the L7 proxy to share session state is now removed.
Example Customer Scenario #2
Here is another customer scenario.
This time the application is a full suite of multimedia content. In contrast to the first scenario, this one will illustrate the benefits of rearchitecting various components, allowing greater flexibility when leveraging the cloud. You still must factor in in-house skill set, project time frames, and other important business (and application) requirements when deciding on the best migration type.
Current Setup:
Multimedia (gaming, movie, TV, music) platform
BIG-IP VIPRIONs using vCMP on-prem
Two data centers for HA redundancy
iRule heavy (security, traffic manipulation, performance)
Biggest app: OAuth + Cassandra for token storage (entitlements)
Requirements for Successful Cloud Migration:
Support current traffic numbers
Elastic auto scale for seasonal growth (ex. holidays)
VPC peering with partners (must also bypass Web Application Firewall)
Must support current or similar traffic manipulation in the data plane
Compatibility with existing tooling used by the business
Recommended Design for Cloud Phase #1:
Migration type: Repurchase, migrating BIG-IP to NGINX Plus
Platform: NGINX, with iRules converted to JS or Lua
Licensing: NGINX Plus
Modules: GeoIP, Lua, JavaScript
HA method: N+1 auto scaling via native LB with active health checks
This is a great example of a Repurchase, in which application characteristics can allow the various teams to explore alternative cloud migration approaches. In this scenario, it describes a phase-one migration converting BIG-IP devices to NGINX Plus devices. This example assumes the BIG-IP configurations can be somewhat easily converted to NGINX Plus, and it also assumes there is available skillset and project time allocated to properly rearchitect the application where needed.
Summary
OK! Brains are expanding...hopefully? We learned about high availability and what that means for applications and user experience. We touched on the importance of application behavior and traffic sizing. Then we explored the various F5 products, how they handle HA, and HA designs. These recommendations are based on my own lab testing and interactions with customers. Every scenario will carry its own requirements, and all options should be carefully considered when leveraging the public cloud. Finally, we looked at a customer scenario, discussed requirements, and proposed a design. Fun!
Resources
Read the following articles for more guidance specific to the various cloud providers.
Advanced Topologies and More on Highly Available Services
Lightboard Lessons - BIG-IP Deployments in Azure
Google and BIG-IP
Failing Faster in the Cloud
BIG-IP VE on Public Cloud
High-Availability Load Balancing with NGINX Plus on Google Cloud Platform
Using AWS Quick Starts to Deploy NGINX Plus
NGINX on Azure
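As referenced in Design #3 above, here is a hedged boto3 (Python) sketch of the DNS-based regional failover idea using Route 53 failover records and health checks. The zone ID, record name, and addresses are placeholders, and real GSLB deployments (e.g. BIG-IP DNS) will differ.

import boto3
import uuid

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"   # placeholder hosted zone
RECORD_NAME = "app.example.com."        # placeholder record name
PRIMARY_VIP = "198.51.100.10"           # placeholder: BIG-IP VE VIP in region 1

# Health check that probes the primary region's BIG-IP virtual server.
check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "IPAddress": PRIMARY_VIP,
        "Port": 443,
        "Type": "HTTPS",
        "ResourcePath": "/",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Primary failover record; a matching SECONDARY record would point at region 2.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "A",
                "SetIdentifier": "region-1-primary",
                "Failover": "PRIMARY",
                "TTL": 30,
                "ResourceRecords": [{"Value": PRIMARY_VIP}],
                "HealthCheckId": check["HealthCheck"]["Id"],
            },
        }]
    },
)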
Using F5 Application Security and DoS Solutions with AWS Global Accelerator Part 1
How do you secure applications that are not a good fit for a CDN? In Part 1 of the series we cover how F5 security solutions combined with AWS Global Accelerator can solve for performance and security concerns from L3 to L7. In the overall series we will explore WAF, bot, and DoS vectors, applying cloud concepts of elasticity, declarative interfaces, and as-code practices.
How does F5 AS3 really work under the hood?
Put simply, AS3 is a way to configure a BIG-IP once a BIG-IP is already provisioned. Full stop! We can also use AS3 to maintain that configuration over time. The way it works is we as a client send a JSON declaration via REST API, and the AS3 engine is supposed to work out how to configure BIG-IP the way it's been declared. AS3's internal components (parser and auditor) are explained further ahead. For a non-DEV audience, AS3 is simply the name we give to an intelligent listener which acts as an interpreter: it reads our declaration and translates it to the proper commands to be issued on the BIG-IP. The AS3 engine may or may not reside on BIG-IP (more on that in the section entitled "The 3 ways of using AS3"). Yes, AS3 is declared in a structured JSON file and there are many examples of how to configure your regular virtual server, profiles, pools, etc., on clouddocs. AS3 uses common REST methods such as GET, POST and DELETE under the hood. For example, when we send our AS3 declaration to BIG-IP, we're sending an HTTP POST with the AS3 JSON file attached. AS3 is part of the Automation Toolchain, which includes Declarative Onboarding and Telemetry Streaming.
What AS3 is NOT
Not a mechanism for Role-Based Access Control (RBAC)
AS3 doesn't support RBAC in a way that lets you allow one user to configure certain objects and another user to configure other objects. AS3 has to use an admin username/password with full access to BIG-IP resources.
Not a GUI
There's currently no native GUI built on top of AS3.
Not an orchestrator
AS3 won't and doesn't work out how to connect to different BIG-IPs and automatically figure out which box it needs to send which configuration to. All it does is receive a declaration, forward it on, and configure BIG-IP.
Not for converting BIG-IP configuration
We can't currently use AS3 to pull BIG-IP configuration and generate an AS3 configuration, but I hope this functionality will be available in the future.
Not for licensing or other onboarding functions
We can't use AS3 for things like configuring VLANs or NTP servers. We use AS3 to configure BIG-IP once it's already been initially provisioned. For BIG-IP's initial setup, we use Declarative Onboarding.
Why should we use AS3?
To configure and maintain BIG-IPs across multiple versions using the same automated workflow. A simple JSON declaration becomes the source of truth with AS3, where configuration edits should only be made to the declaration itself. If multiple BIG-IP boxes use the same configuration, a single AS3 declaration can be used to configure the entire fleet. It can also be easily integrated with external automation tools such as Ansible and Terraform.
What I find really REALLY cool about AS3
AS3 targets and supports BIG-IP version 12.1 and higher. Say we have an AS3 declaration that was previously used to configure a BIG-IP v12.1 box, right? Regardless of whether we're upgrading or moving config to another box, we can still use the same declaration to configure a BIG-IP v15.1 box in the same way. I'm not joking! Back in the F5 Engineering Services days, I still remember when I used to grab support tickets where the issue was a configuration from an earlier version that was incompatible with a newer version, e.g. a profile option was moved to a different profile, or a new feature was added that requires some other option to be selected, etc. This is supposed to be a thing of the past with AS3.
AS3 Key Features
Transactional
If you're a DBA, you've certainly heard of the term ACID (atomicity, consistency, isolation, and durability).
Let's say we send an AS3 declaration with 5 objects. AS3 will either apply the entire declaration or not apply it at all. What that means is that if there's one single error, AS3 will never apply part of the configuration and leave BIG-IP in an unknown/inconsistent state. There's no in-between state. Either everything gets configured or nothing at all. It's either PASS or FAIL.
Idempotent
Say we send a declaration where there's nothing to configure on BIG-IP. In that case, AS3 will come back to the client and inform it that there's nothing to do. Essentially, AS3 won't remove BIG-IP's entire config and then re-apply it. It is smart enough to determine what work it needs to do, and it will always do as little work as possible.
Bounded
AS3 enforces multi-tenancy by default, i.e. AS3 only creates objects in partitions (known as "tenants" in AS3 jargon) other than /Common. If we look at the AS3 declaration examples, we can see that a tenant (partition) is specified before we declare our config. AS3 does not create objects in the /Common partition. The exception to that is /Common/Shared, used when objects are supposed to be shared among multiple partitions/tenants. An example is when we create a pool member and a node gets automatically created on BIG-IP. Such a node is created in the /Common/Shared partition because that node might be a pool member in another partition, for example. Nevertheless, AS3's scope is and must always be bounded.
The 3 ways of using AS3
Using AS3 through BIG-IP
In this case, we install the AS3 RPM on each BIG-IP. BIG-IP is the box that has the "AS3 listener" waiting for us to send our AS3 JSON config file. All we need to do is download AS3's binary and install it locally. There's a step-by-step guide for babies (with screenshots) here using BIG-IP's GUI. There's also a way to do it using curl if you're a geek like me here.
Using AS3 through BIG-IQ
In this case, we don't need to manually install the AS3 RPM on each BIG-IP box like in the previous step. BIG-IQ does it for us. BIG-IQ v6.1.0+ supports AS3 and we can directly send declarations through BIG-IQ. Apart from installing, BIG-IQ also upgrades AS3 in the target box (or boxes) if they're using an older version. Analytics and RBAC are also supported.
Using AS3 through a Docker container
This is where AS3 is completely detached from BIG-IP. In the Docker container setup, the AS3 engine resides within a Docker container, decoupled from BIG-IP. Say your environment has Docker containers running, which is not uncommon nowadays. We can install AS3 in a Docker container and use that container as the entry point to send our AS3 declaration to BIG-IP. Yes, we as the client send our AS3 JSON file to where the Docker container is running and, as long as the Docker container can reach our BIG-IP, it will connect and configure it. Notice that in this case our AS3 engine runs outside of BIG-IP, so we don't have to install AS3 on our BIG-IP fleet here. The Docker container communicates with BIG-IP using iControl REST, sending tmsh commands directly.
AS3 Internal Components
The AS3 engine is comprised of an AS3 parser and an AS3 auditor:
AS3 Parser
This is the front-end part of AS3 that communicates with the client and is responsible for validating the client's declaration.
AS3 Auditor
After receiving the validated declaration from the AS3 parser, the AS3 auditor's job is to compare the desired validated declaration with BIG-IP's current configuration. It then determines what needs to be added or removed and forwards the changes to BIG-IP.
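To make the client side of this flow concrete, here is a small, hedged Python sketch of posting a minimal declaration to the AS3 endpoint (/mgmt/shared/appsvcs/declare). The address, credentials, and tenant/application names are placeholders, and the declaration is intentionally trimmed; see the clouddocs examples for complete, supported declarations.

# Hypothetical client-side sketch: send a minimal AS3 declaration to a BIG-IP
# running the AS3 RPM. Placeholder address and credentials.
import requests

BIGIP = "https://192.0.2.10"       # placeholder management address
AUTH = ("admin", "REPLACE_ME")     # AS3 requires an admin account

declaration = {
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "Sample_Tenant": {                       # tenant = partition on BIG-IP
            "class": "Tenant",
            "Sample_App": {
                "class": "Application",
                "template": "http",
                "serviceMain": {
                    "class": "Service_HTTP",
                    "virtualAddresses": ["10.0.1.10"],
                    "pool": "web_pool"
                },
                "web_pool": {
                    "class": "Pool",
                    "members": [{"servicePort": 80,
                                 "serverAddresses": ["10.0.2.10", "10.0.2.11"]}]
                }
            }
        }
    }
}

resp = requests.post(
    f"{BIGIP}/mgmt/shared/appsvcs/declare",
    json=declaration,
    auth=AUTH,
    verify=False,   # demo only; validate certificates in real use
)
print(resp.status_code, resp.json())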
AS3 in Action
The way it works is that the client sends a declaration to the AS3 parser and the config validation process kicks in. If the declaration is not valid, it throws an error back to the client along with an error code. If valid, the declaration is forwarded on to the AS3 auditor, which then compares the declaration with the current BIG-IP config and determines what needs to change. Only the configuration changes are supposed to be sent to BIG-IP, not the whole config. The AS3 auditor then converts the AS3 declaration to tmsh commands and sends the converted tmsh config to BIG-IP via iControl REST. BIG-IP then pushes the changes via tmsh commands and returns success/error to the AS3 auditor. If the changes are not successful, an error is returned all the way to the client. Otherwise, a success code is returned to the client and the changes are properly applied to BIG-IP. Here's the visual description of what I've just said:
Debugging AS3
AS3 schema validation errors are returned in the HTTP response with a message pointing to the specific error. This includes typos in property names and so on. Logs on BIG-IP are stored in /var/log/restnoded/restnoded.log and by default only errors are logged. The log level can be changed through the Controls object in the AS3 declaration itself.
AS3 vs Declarative Onboarding
This is usually a source of confusion, so I'd like to clarify it a bit. AS3 is the way we configure BIG-IP once it's already up and running. Declarative Onboarding (DO) is for the initial configuration of BIG-IP, i.e. setting up the license, users, DNS, NTP and even provisioning modules. Just like AS3, DO is API-only, so there is no GUI on top of it. We can also have AS3 and DO on the same BIG-IP, so that's not a problem at all. Currently, there's no option to run DO in a container like AS3, so as far as I'm concerned, it's only RPM based.
Resources
AS3: CloudDocs, GitHub Repo, Releases
Declarative Onboarding (DO): CloudDocs, GitHub Repo, Releases
I'd like to thank F5 Software Engineers Steven Chadwick and Garrett Dieckmann from the AS3 team for providing brilliant reference material.
Using F5 Application Security and DOS Solutions with AWS Global Accelerator - Part 2 Building a Lab
Have you ever wanted to test AWS Global Accelerator with F5 security solutions? Do you want to know how to protect applications that cannot use a CDN but require an optimized experience? We will do that and more in this blog series. In Part 2 we will build our lab by deploying our VPC, BIG-IPs, Global Accelerator, and an example application.