Shed the Responsibility of WAF Management with F5 and Cloud Interconnect: Part 3 of 4
Devs are stuck managing BIG-IP VEs, and they don’t want to be
Move to a colo facility and use BIG-IP and Cloud Interconnect
In the last post, we talked about keeping the BIG-IP VE auto-scaled WAF up-to-date. This is the configuration we’re currently running:
The issue with this configuration is that the maintenance of BIG-IP VEs in AWS is falling on the development team. This, of course, is problematic for a few reasons:
- The dev team wants to spend their time developing
- The network engineers have lost control over the security and stability of the environment, and want it back
Another issue is that there is latency in the VPN and it’s relatively unstable. This causes issues keeping our inventory up-to-date, which in turn causes loss of sales, as shoppers go to other sites for inferior lemonade flavors.
Enter F5 and Cloud Interconnect. If you haven’t heard of it already, the basic idea is that your BIG-IPs can live in the same data centers as the cloud servers that your applications run on. Interconnect refers to a vendor-neutral facility where cloud and service providers meet.
If we take our existing setup and move it to a colocation data center, it would look something like this:
You can think of Cloud Interconnect as a simple Layer 2 switch that provides L2 connectivity between your cage and the various virtual networks you carved out in the cloud (which we’ll call "cloud provider networks" from now on).
Now we can use BGP (Border Gateway Protocol) to route traffic between the on-premises inventory database and our application on the cloud provider network.
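As a rough sketch of what that peering looks like, BIG-IP's dynamic routing is enabled per route domain in tmsh and then configured through its ZebOS-based imish shell. All AS numbers, neighbor addresses, and prefixes below are hypothetical examples, not values from this environment:

```
# From the BIG-IP shell: enable BGP on the default route domain
tmsh modify net route-domain 0 routing-protocol add { BGP }

# Then enter the imish routing shell and peer with the cloud
# provider network (example ASNs and prefixes):
imish
router bgp 65000
 neighbor 172.16.1.2 remote-as 65001
 network 10.10.0.0/16
```

Here the `network` statement advertises the on-premises subnet (where the inventory database lives) to the cloud-side peer, so return traffic knows the way back across the interconnect.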
Because traffic rides this Layer 2 connection and is exchanged via routing protocols, it is never exposed to the internet, keeping it secure and protected.
In addition, we now have WAN connectivity between our on-premises data center and the colo location.
- The WAN is far more stable than the VPN, ensuring good synchronization between the database and our app.
- The WAN connection allows direct connectivity to the cloud provider networks, enabling things like direct RDP and SSH access to cloud instances.
With Cloud Interconnect, you get:
- Low latency: Fiber optic links are directly connected to the cloud virtual network
- High security: Nothing travels over the internet, just as when everything lived in your own data center
- Performance SLAs: The colo provider guarantees network performance
- Scale and choice: Multiple cloud providers are in the colo location
With BIG-IP and Cloud Interconnect, you get all of that and more, including:
- Single Sign-on
- Layer 7 traffic manipulation
- Web Application Firewall
- Data leakage prevention
- Intrusion prevention
And the administration of BIG-IP now rests with the team that knows BIG-IP best—NetOps/SecOps/IT.
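To give a flavor of the Layer 7 traffic manipulation in that list, here is a small iRule (BIG-IP's event-driven Tcl scripting) that inspects and rewrites requests as they pass through the BIG-IP. The URI and hostname are made-up examples, not part of this deployment:

```tcl
when HTTP_REQUEST {
    # Redirect a retired endpoint to its replacement
    # (path and hostname are hypothetical examples)
    if { [HTTP::uri] starts_with "/legacy-store" } {
        HTTP::redirect "https://shop.example.com/store"
    }
}
```

The same event-driven model is what the WAF, SSO, and other services listed above build on, which is why having them all on one platform is so convenient.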
Get Started with Cloud Interconnect
For more detailed information about Cloud Interconnect, see: https://f5.com/products/cloud-computing/cloud-interconnection
For details on how to configure BIG-IP in a Cloud Interconnect environment, see:
And if you want to work with Equinix specifically, you can follow this guide: https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-cloud-interconnect-equinix-implementation-12-1-0.html
This all sounds great, right? Developers are relieved at the prospect of no longer messing with the WAF. NetOps/SecOps are happy to regain the central point of control they want and expect.
The downside to this solution is that you’re almost back to where you started. As a developer, whenever you spin up a new environment, you have to contact IT to update the BIG-IPs. You don’t want to do this every time you have a change. The speed and agility the cloud promised are blunted by this new setup.
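In practice, that "contact IT" step might be as small as asking an admin to run something like the following tmsh command (the pool name and member IPs are made-up examples), but it still means a ticket and a wait for every new environment:

```
# Hypothetical NetOps request: put the new dev environment's
# web servers into a pool behind the WAF (names/IPs are examples)
tmsh create ltm pool dev_env42_pool members add { 10.30.42.10:443 10.30.42.11:443 }
```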
All is not lost! Come back for part 4 in this series to hear about another rockin’ F5 solution.
And in case you missed it, here are the previous posts: