Secure Your New AWS Application with an F5 Web Application Firewall: Part 2 of 4
In Part 1 of our series, we used a CloudFormation Template (CFT) to create a repeatable deployment of our application in AWS. Our app is running in the cloud, our users are connecting to it, and we’re serving traffic. But more importantly, we’re selling our products.
However, after a bad experience with our application falling down, we learned the hard way that it’s no longer secure.
The challenge
Our app in the cloud is getting hacked
The solution
Add a scalable web application firewall (WAF)
In the data center, we had edge security measures that protected our application. In the cloud, we no longer have that protection.
We’re now vulnerable to attacks. This doesn’t mean that Amazon is not a secure cloud environment; it means that we need to secure our application and its data in the cloud.
With a little research, we found that Amazon has a shared responsibility model for security. Amazon takes responsibility for security “of” the cloud, and as an organization hosting an application in AWS, we’re responsible for security “in” the cloud. For more information, see https://aws.amazon.com/compliance/shared-responsibility-model/.
In our last article, we showed a fairly simple setup in AWS. Now, to secure our application, we’re going to add a BIG-IP VE web application firewall (WAF) cluster. Not only will this secure our application, but it also takes advantage of AWS Auto Scaling, adding more BIG-IP VE instances when traffic or CPU load requires it.
To create and configure this Auto Scaling WAF, F5 provides a CloudFormation template.
This template and others are available on GitHub.
This CFT assumes that you already have a VPC with multiple subnets, each in a different availability zone. If you ran our CFT from last week, you should have this already.
You must also create a classic AWS ELB that will go in front of the BIG-IP VE instances. This ELB should listen on port 80 and have a health check for TCP port 8443.
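If you prefer the command line over the console, here’s a rough sketch with the AWS CLI. The subnet and security group IDs are placeholders, and forwarding port 80 to port 80 on the instances is an assumption for a typical setup.

```
# Create a Classic ELB listening on port 80
# (subnet/security group IDs are placeholders; port-80-to-port-80 forwarding is an assumption)
aws elb create-load-balancer \
  --load-balancer-name BIGIPELB \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-cccc3333

# Health check against TCP port 8443 on the BIG-IP VE instances
aws elb configure-health-check \
  --load-balancer-name BIGIPELB \
  --health-check Target=TCP:8443,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
```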
The ELB should also have a security group associated with it. This group should have the following Inbound ports open: 22 (for SSH access to BIG-IP VE), 8443 (for the BIG-IP VE Configuration utility), and 80 (for the web app).
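Here’s a rough AWS CLI sketch for that security group. The group name and VPC ID are placeholders, and the wide-open 0.0.0.0/0 source is only for testing; restrict it to your own address ranges in production.

```
# Create the security group for the ELB (group name and VPC ID are placeholders)
SG_ID=$(aws ec2 create-security-group \
  --group-name bigip-elb-sg \
  --description "BIG-IP VE WAF cluster access" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)

# Open the three inbound ports; tighten the source CIDR for production
for port in 22 8443 80; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```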
Before you deploy the template, gather this information (you can look most of it up with the AWS CLI, as sketched after the list):
- The AWS ELB name (the one that will go in front of the BIG-IP VEs), for example, BIGIPELB.
- The VPC, subnet, and security group names/IDs
- The DNS name for the ELB in front of the app servers, for example: Test-StackELB-55UMG84080MI-342616460.us-east-2.elb.amazonaws.com.
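If you don’t have these values handy, here’s roughly how to look them up with the AWS CLI. The ELB name and VPC ID below are placeholders; substitute your own.

```
# DNS name of the ELB in front of the app servers (ELB name is a placeholder)
aws elb describe-load-balancers \
  --load-balancer-names Test-StackELB \
  --query 'LoadBalancerDescriptions[0].DNSName' --output text

# Subnet IDs and their availability zones in the VPC
aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query 'Subnets[].[SubnetId,AvailabilityZone]' --output table

# Security group IDs and names in the VPC
aws ec2 describe-security-groups \
  --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query 'SecurityGroups[].[GroupId,GroupName]' --output table
```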
When you deploy the template, an Auto Scaling group, a launch configuration, and a BIG-IP VE instance are created.
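You can launch the template from the AWS console, or from the CLI along these lines. The stack name, template file name, and parameter keys below are illustrative placeholders; use the parameter names documented with the F5 template on GitHub.

```
# Deploy the Auto Scaling WAF template (file name and parameter keys are placeholders)
aws cloudformation create-stack \
  --stack-name autoscale-waf \
  --template-body file://f5-autoscale-waf.template \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=bigipElbName,ParameterValue=BIGIPELB \
    ParameterKey=appElbDnsName,ParameterValue=<app-ELB-DNS-name>
```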
You can connect to the website by using the BIG-IP ELB address, for example: http://bigipelb-1631946395.us-east-2.elb.amazonaws.com/. The ID of the server you're connected to is displayed on the top menu bar.
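For a quick smoke test from the command line (substitute your own BIG-IP ELB DNS name):

```
# Confirm the site responds through the BIG-IP ELB
curl -I http://bigipelb-1631946395.us-east-2.elb.amazonaws.com/
```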
If you want to access BIG-IP VE, you can use SSH to connect to the instance. Then you can set the admin password (tmsh modify auth password admin), and connect to the BIG-IP VE Configuration utility (https://PublicIP:8443).
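Roughly, assuming the default admin account and the key pair you chose at deployment:

```
# SSH to the BIG-IP VE instance (admin account and key file name are assumptions; adjust to your setup)
ssh -i ~/.ssh/my-aws-key.pem admin@<BIG-IP-VE-public-IP>

# On the BIG-IP VE, set the admin password (you'll be prompted for the new value)
tmsh modify auth password admin

# Then log in to the Configuration utility in a browser:
#   https://<BIG-IP-VE-public-IP>:8443
```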
The BIG-IP VE instances that make up the WAF cluster are licensed hourly, and they automatically license themselves when they are launched. The images come with different throughput limits. We’re testing right now, so we’re going to start with a 25 Mbps image on a small AWS instance type (2 vCPUs, 4 GB memory). Later, when we go to production, we can update the throughput and AWS instance type.
Maintenance of the WAF Cluster
The challenge
Over time, the WAF cluster needs updates
The solution
Update the CloudFormation stack without bringing down the cluster
You’ve got your Auto Scaling WAF up and running and it’s sending notifications about traffic that it’s analyzing.
When we created this deployment, we specified 25 Mbps as the throughput limit for our BIG-IP VE instances. But now we’re selling millions of packets of our hotdog-flavored lemonade and it’s time to add some resources.
The good news is that you can simply re-run the CloudFormation stack and update the settings. New instances will be launched and old instances will be terminated. Traffic will continue to be processed during this time. To ensure the new BIG-IP VE instances have the same configuration as the ones you’re terminating, you must save off the BIG-IP VE configuration before you re-deploy.
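With the AWS CLI, the update looks roughly like this. The stack name and parameter keys are placeholders; use the parameter names defined in the F5 template.

```
# Update the stack in place with new throughput, instance type, and scaling settings
aws cloudformation update-stack \
  --stack-name autoscale-waf \
  --use-previous-template \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=throughput,ParameterValue=200Mbps \
    ParameterKey=instanceType,ParameterValue=m4.xlarge \
    ParameterKey=scalingMaxSize,ParameterValue=4
```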
IS THIS REALLY POSSIBLE?
Yes. This is possible. The WAF keeps running, customers keep buying, and the lemonade packets are flying out the door.
For example, let’s say we want to increase the BIG-IP VE throughput, the number of BIG-IP VE instances, and the AWS instance type.
To do this, we:
- Back up the BIG-IP VE to a .ucs file
- Save the .ucs file to the S3 bucket that was created when we deployed the CFT
- Re-deploy the CFT and choose different settings (the first two steps are sketched after this list)
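The first two steps look roughly like this. The archive name and S3 bucket name are placeholders; the bucket is the one the CFT created for this deployment.

```
# On the BIG-IP VE: save the running configuration to a UCS archive
tmsh save /sys ucs /var/local/ucs/pre-update-backup.ucs

# Copy the archive to the deployment's S3 bucket
# (run from wherever you have the archive and the AWS CLI; bucket name is a placeholder)
aws s3 cp pre-update-backup.ucs s3://<waf-deployment-bucket>/
```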
For more information, watch this video that shows how it works.
Over time you can also:
- Upgrade to newer versions
- Apply hotfixes, security fixes, etc.
This same process applies; you can update your running configuration without losing your changes.
The bummer is, if you're a developer, you may not want to manage and maintain a WAF. Never fear, you have options! Part 3 will address this issue.