Ingress/Egress VPC inspection with BIG-IP and GWLB
Introduction

The previous article in this series reviewed the BIG-IP and AWS Gateway Load Balancer (GWLB) integration. In this article we focus on a deployment pattern that inspects traffic entering and leaving a VPC using BIG-IP security services and GWLB.

Baseline: in this scenario we focus on a single existing VPC with EC2 instances and no BIG-IP security services.

Goal: inspect traffic in and out of the VPC, and scale the BIG-IP deployment as needed.

Deployment pattern: considering the requirements and the tools available, the deployment pattern has the following attributes:

- Separate the security devices (BIG-IPs) from the workloads they protect. The BIG-IPs are deployed in their own VPC, the security VPC. We refer to the workload VPC as the 'consumer' VPC, since it consumes security services.
- Use routing tables in the consumer VPC to send all traffic to the security VPC for inspection.
- Leverage transparent security services on the BIG-IP for inspection (BIG-IP security feature configuration is out of scope for this article).

We can now dive into each of the individual tasks:

- The security VPC
- Provisioning the consumer VPC network to send traffic through the security VPC

Security VPC

Here, we deploy the BIG-IP fleet and expose it using GWLB.
Some considerations when creating this VPC:

- Deploy in the same region as the consumer VPC.
- Design based on your availability requirements: the number of availability zones (AZs), and how many BIG-IPs in each AZ.

These are the actions we need to take in the provider VPC to inspect all ingress/egress traffic:

- Deploy a group of BIG-IPs.
- Create a GWLB target group and associate the BIG-IPs with it.
- Create a GWLB and assign the previously created target group to the listener.
- Create a GWLB endpoint service and associate it with the GWLB.
- Configure the BIG-IPs to receive traffic over the tunnel and inspect it according to the desired policy.

Diagram: The security VPC - BIG-IP fleet behind a GWLB, exposed using a GWLB endpoint service

Consumer VPC

In the consumer VPC, the BIG-IP group is abstracted by the GWLB, and the VPC consumes the security services from the provider VPC via a new component: the GWLB endpoint. This endpoint acts as a bridge between the consumer VPC and the provider VPC; it essentially creates an ENI in one of the consumer VPC's subnets. Please note that a single endpoint belongs to a single availability zone, so design accordingly. Inspecting all ingress traffic requires 'ingress routing', an AWS feature that allows sending all ingress traffic from the internet gateway to an ENI or to a GWLB endpoint.

Here are the actions we need to take in the consumer VPC to inspect all ingress/egress traffic:

- Create GWLB endpoints in each relevant availability zone, attached to the GWLB endpoint service from the provider VPC.
- Change the ingress routing table so that all traffic in each AZ is routed to the respective GWLB endpoint.
- Change the subnet routing tables so that all traffic in each AZ is routed to the respective GWLB endpoint.
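The consumer-VPC steps above can be sketched as boto3 request parameters. This is a minimal sketch under assumed names — the endpoint-service name and every VPC, subnet, route-table, and endpoint ID below are illustrative placeholders, not values from a real deployment:

```python
# Sketch of the consumer-VPC side as boto3 (ec2 client) request parameters.
# All IDs and the endpoint-service name are illustrative placeholders.

def gwlb_endpoint_params(vpc_id, subnet_id, service_name):
    """Parameters for ec2.create_vpc_endpoint(): one endpoint per AZ,
    attached to the GWLB endpoint service exposed by the provider VPC."""
    return {
        "VpcEndpointType": "GatewayLoadBalancer",
        "VpcId": vpc_id,
        "SubnetIds": [subnet_id],  # a single endpoint lives in a single AZ
        "ServiceName": service_name,
    }

def route_to_endpoint_params(route_table_id, destination_cidr, endpoint_id):
    """Parameters for ec2.create_route(): used both for the ingress (IGW edge)
    route table and for the workload subnet route tables."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": destination_cidr,
        "VpcEndpointId": endpoint_id,
    }

# Ingress routing: the IGW's edge route table sends traffic destined for a
# workload subnet to that AZ's GWLB endpoint; the subnet's own route table
# sends everything (0.0.0.0/0) back out through the same endpoint.
ingress_route = route_to_endpoint_params("rtb-igw-edge", "10.0.1.0/24", "vpce-az1")
egress_route = route_to_endpoint_params("rtb-subnet-az1", "0.0.0.0/0", "vpce-az1")
```

With a real boto3 EC2 client, each dict would be passed as keyword arguments, e.g. `ec2.create_vpc_endpoint(**gwlb_endpoint_params(...))`.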
Diagram: Inspecting all ingress/egress in the security provider VPC

Traffic flow

Ingress traffic flow between an external user and an EC2 instance in the consumer VPC:

Egress traffic flow between an EC2 instance in the consumer VPC and an external user:

Summary

With this deployment you can protect your AWS VPC using the robust security services offered by the BIG-IP platform and get the following benefits:

- Scalability - deploy as many BIG-IP instances as you need based on performance and availability requirements.
- Transparent inspection - inspect traffic without any address translation.
- Optimized compute - all BIG-IP devices are active and processing traffic.

Next steps:

Test the deployment yourself - check out our self-service lab that you can deploy in your own AWS account (fully automated deployment using Terraform):
https://github.com/f5devcentral/f5-digital-customer-engagement-center/tree/main/solutions/security/ingress-egress-fw

BIG-IP integration with AWS Gateway Load Balancer - Overview
Introduction

With the release of TMOS version 16.1, BIG-IP now supports AWS Gateway Load Balancer (GWLB). This integration makes it much easier and simpler to insert BIG-IP security services into an AWS environment while maintaining high availability and supporting elastic scalability of the BIG-IPs.

When to use GWLB?

F5 BIG-IP delivers a wide range of application and security services. Depending on the service and other requirements, the BIG-IPs are typically deployed in one of two modes: Network mode or Proxy mode.

Important: GWLB is currently applicable only to the Network deployment mode.

First, identify which deployment mode is relevant for you:

Network (GWLB supported)
- Common use cases: network firewall, DDoS protection, transparent WAF.
- Flow transparency is maintained (no source or destination NAT).
- Traffic is directed to the BIG-IPs by network routing; making sure traffic returns to the same BIG-IP, to maintain traffic symmetry, is also based on routing.

Proxy (not GWLB supported)
- Provides ingress services to the application (WAF, LB, L7 DDoS, bot protection); services are applied to an application-specific virtual server on the BIG-IP.
- BIG-IP SNAT (source NAT) manages the traffic to ensure that return traffic arrives at the same BIG-IP, thus maintaining traffic symmetry.
- User traffic is usually directed to the BIG-IPs using DNS: an FQDN maps to a specific virtual server on the BIG-IP.

Important: GWLB does not support proxy devices in the inspection zone. Traffic flow must remain unchanged by the inspection device. For more details see: https://aws.amazon.com/blogs/networking-and-content-delivery/integrate-your-custom-logic-or-appliance-with-aws-gateway-load-balancer/
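The transparency requirement above — a flow must leave the inspection zone with the same identity it entered with — can be illustrated with a small helper. The 5-tuple representation here is an assumption made for illustration, not an AWS or F5 data structure:

```python
from typing import NamedTuple

class FiveTuple(NamedTuple):
    """The flow identity GWLB tracks; it must be identical before and after
    the inspection device (no source/destination NAT, no port rewrite)."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    protocol: str

def gwlb_compatible(ingress: FiveTuple, egress: FiveTuple) -> bool:
    """True when the inspection device left the flow's 5-tuple unchanged,
    i.e. a Network-mode (transparent) deployment."""
    return ingress == egress

flow_in = FiveTuple("203.0.113.10", 50000, "10.0.1.20", 443, "tcp")
transparent = flow_in                          # Network mode: flow untouched
proxied = flow_in._replace(src_ip="10.0.2.5")  # Proxy mode: SNAT rewrites the source
```

Here `gwlb_compatible(flow_in, transparent)` holds, while the SNAT'ed flow fails the check — which is why Proxy-mode deployments fall outside GWLB's supported pattern.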
Existing challenges of BIG-IP network deployment without GWLB

Let's examine two scenarios in which we use the BIG-IP to inspect network traffic:

- Ingress/egress protection: in this scenario, we want to inspect all traffic coming into and going out of the AWS network. The most relevant security services here are firewall, DDoS protection, WAF and IPS.
- East-west (inter-VPC / inter-network) protection: in this scenario we want to inspect all traffic between VPCs and between on-prem and VPCs. The most common security services used are firewall and IPS.

In both scenarios, we need to ensure the relevant flows are routed to the BIG-IP. We also need to verify that traffic is symmetric: any flow that was sent to a BIG-IP must also return through the same BIG-IP.

Today, without GWLB, our method of getting traffic to the BIG-IP is to manipulate the routing tables accordingly. An AWS routing table accepts an ENI (Elastic Network Interface, the network interface of an EC2 instance running a BIG-IP) as the 'gateway' for a specific route. Since a route only accepts a single ENI, we can only send traffic to a single BIG-IP instance, which of course creates a single point of failure.

We can improve this design by leveraging F5's Cloud Failover Extension (CFE), which allows us to create an active/standby pair. The CFE automatically manipulates the route tables to always point traffic at the ENI of the active device. You can read more on this design here: https://clouddocs.f5.com/products/extensions/f5-cloud-failover/latest/userguide/aws.html?highlight=route

In summary, the network deployment of the BIG-IP in AWS has the following challenges:

- Limited scale - throughput, concurrent connections, etc.,
as only a single device processes the traffic.
- Unoptimized compute - the passive device sits idle.
- Mixed admin domains - the BIG-IPs are deployed in the same VPC as the workloads they protect, and routing table changes are controlled by F5's CFE, which is not always wanted or possible.

How GWLB works with BIG-IP

To understand how AWS GWLB works, let's start by taking a closer look at the GWLB. We can break its functionality down into two main elements:

On the frontend: GWLB presents itself as a next-hop L3 gateway to manage traffic transparently.

- GWLB uses a new component, the GWLB endpoint. This endpoint acts as an ENI (Elastic Network Interface) of the GWLB. The only way to send traffic to GWLB is through this endpoint, so it is deployed in a VPC that consumes the security services (i.e. the consumer VPC).
- Admin domain separation: the GWLB and the BIG-IP fleet are deployed in their own VPC (the security VPC). One GWLB can receive and inspect traffic from many VPCs (find more detail on design patterns in the 'Deployments' section).

On the backend: GWLB behaves like an L4 load balancer, very similar to NLB (Network Load Balancer). It:

- Distributes traffic across a fleet of BIG-IP instances.
- Routes flows and assigns them to a specific BIG-IP for stateful inspection.
- Performs health checks of the BIG-IPs, and routes traffic to healthy BIG-IPs only.
- Sends traffic over an L3 tunnel using the GENEVE protocol (requires BIG-IP version 16.1 and above).

BIG-IP integration details

Requirements: BIG-IP version 16.1 and above.

BIG-IP traffic flow: the BIG-IP must be deployed in the same VPC as the GWLB. The BIG-IP receives traffic from GWLB over a GENEVE tunnel that uses the BIG-IP's private IP address. Next, we create a virtual server that is enabled on the new tunnel. The tunnel receives the traffic from GWLB and decapsulates it, and the traffic is then processed by the inspection virtual server.
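As a rough sketch of what this looks like on the BIG-IP, the following builds iControl REST-style payloads for the GENEVE tunnel and the wildcard inspection virtual server. The endpoint paths, field names, and values are assumptions for illustration only — consult F5's GWLB documentation for the verified configuration:

```python
# Hypothetical iControl REST payload shapes for the tunnel + inspection
# virtual server described above. IPs, names, and paths are placeholders.

BIGIP_SELF_IP = "10.1.1.10"  # assumed private self IP the GENEVE tunnel uses

geneve_tunnel = {
    # e.g. POST /mgmt/tm/net/tunnels/tunnel (path assumed)
    "name": "geneve-tunnel",
    "profile": "geneve",           # GENEVE encapsulation (UDP 6081)
    "localAddress": BIGIP_SELF_IP,
    "remoteAddress": "any",        # accept encapsulated traffic from GWLB
}

inspection_vs = {
    # e.g. POST /mgmt/tm/ltm/virtual (path assumed)
    "name": "inspection-vs",
    "destination": "0.0.0.0:0",          # wildcard: inspect all flows
    "ipProtocol": "any",
    "vlansEnabled": True,
    "vlans": [geneve_tunnel["name"]],    # enabled only on the tunnel
    "translateAddress": "disabled",      # keep the flow's 5-tuple intact
    "sourcePort": "preserve-strict",
}
```

The two 5-tuple-preserving settings at the bottom correspond to the configuration changes discussed in the considerations below.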
After the virtual server processes the packet and applies the defined security profiles, it needs to forward the packet back into the tunnel. The way to 'force' traffic back into the tunnel is to create a fake node that 'lives' inside the tunnel and use it as the target pool of the virtual server. Once the tunnel interface receives the packets from the virtual server, it encapsulates them with GENEVE and forwards them back to GWLB. The following diagram describes the traffic flow within a specific BIG-IP device:

Considerations and known issues:

- Traffic flows inside the 'inspection zone' must remain unchanged, i.e. no change to the packet's 5-tuple is allowed.
- All virtual servers are supported as long as the packet's 5-tuple is maintained. That might require changing some configurations, for example:
  - Disable the default 'address translation' feature of a 'standard' virtual server.
  - Set the 'source port' setting to 'preserve strict'.
- Payload manipulation is supported as long as the packet's 5-tuple is maintained. Some examples:
  - HTTP rewrites
  - AWAF blocking responses - AWAF responds with a 'blocking page'

High availability

- When a BIG-IP instance in the target group fails, GWLB continues to send existing flows to the failed instance. Only new connections are directed to another instance.
- Deploy BIG-IP instances across availability zones according to the desired AZ availability requirements.
- The default behavior for GWLB is to keep traffic within the same AZ, meaning traffic that was received on an endpoint in AZ1 will only be forwarded to instances in AZ1. This behavior is controlled by the 'CrossZoneLoadBalancing' flag: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-crosszone-lb.html
- If you do change the 'CrossZoneLoadBalancing' flag, keep in mind that GWLB will distribute traffic across AZs even when all instances are healthy, and this will incur cross-AZ traffic costs.

Flow timeout

GWLB has its own protocol timeouts (350 seconds for TCP, 120 for UDP);
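These flow timeouts can be sanity-checked with a small helper built from the values quoted in this article (GWLB: 350 s TCP / 120 s UDP; BIG-IP defaults: 300 s TCP / 60 s UDP):

```python
# Compare BIG-IP idle timeouts against GWLB's fixed flow timeouts; the
# BIG-IP value must expire first so state is torn down on both sides.

GWLB_TIMEOUTS = {"tcp": 350, "udp": 120}   # seconds, per this article
BIGIP_DEFAULTS = {"tcp": 300, "udp": 60}   # BIG-IP protocol-profile defaults

def timeout_violations(bigip_timeouts, gwlb_timeouts=GWLB_TIMEOUTS):
    """Return the protocols whose BIG-IP idle timeout is NOT smaller than
    GWLB's flow timeout (an empty list means the profile is safe)."""
    return [proto for proto, secs in bigip_timeouts.items()
            if secs >= gwlb_timeouts[proto]]

assert timeout_violations(BIGIP_DEFAULTS) == []              # defaults are safe
assert timeout_violations({"tcp": 400, "udp": 60}) == ["tcp"]  # profile too long
```

A custom TCP profile with a 400-second idle timeout, for example, would outlive GWLB's 350-second flow entry and is flagged.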
the timeout values on the BIG-IP must be smaller than those on GWLB. BIG-IP uses 300 seconds for TCP and 60 for UDP by default.

With the GWLB integration you can now create an elastic group of BIG-IPs that is exposed to the rest of the AWS network using AWS native networking constructs. GWLB is responsible for distributing the traffic evenly across the group while maintaining flow affinity, so that each flow is always routed to the same BIG-IP in the group, solving the symmetry problem without requiring any SNAT. The security inspection zone can be deployed in its own VPC, allowing better administrative separation. The inspection service is exposed using VPC endpoints; those endpoints are tied to the GWLB and can be used as the target of different routing tables (TGW, ingress, subnets).

Summary

The BIG-IP and GWLB integration enables the industry-leading BIG-IP security services with the following benefits:

- Elastic scalability: scale your deployment based on your actual usage; horizontally scale your BIG-IPs.
- Simplified connectivity: leverage AWS native networking constructs to 'insert' BIG-IP security services into different traffic flows (north-south, east-west).
- Cost effectiveness: consolidate BIG-IP deployments and size the deployment based on actual usage.

Next steps:

Check out the next articles in this series:
- Ingress/Egress VPC inspection with BIG-IP and GWLB deployment pattern
- Ingress/Egress and inter VPC inspection with BIG-IP and GWLB deployment pattern

Test the integration yourself - check out our self-service lab that you can deploy in your own AWS account (fully automated deployment using Terraform):
https://github.com/f5devcentral/f5-digital-customer-engagement-center/tree/main/solutions/security/ingress-egress-fw
https://github.com/f5devcentral/f5-digital-customer-engagement-center/tree/main/solutions/security/ingress-egress-inter-vpc-fw-gwlb

Ingress/Egress and inter VPC inspection with BIG-IP and GWLB deployment pattern
Introduction

The previous article in this series provided an overview of the BIG-IP and Gateway Load Balancer integration. This article covers a deployment pattern for inspecting various traffic flows between VPCs by leveraging this integration and AWS Transit Gateway.

Baseline: the following diagram represents the network we would like to protect. This is a common pattern that creates a centralized ingress/egress point for all of the VPCs through the 'internet VPC': each 'spoke VPC' connects to a Transit Gateway (TGW), which provides connectivity to the other spoke VPCs and to the internet VPC. In this baseline state, each spoke VPC can communicate with all other VPCs through the transit gateway without any inspection. Internet traffic is not inspected either.

Goal: inspect all traffic between the VPCs, as well as internet traffic, using BIG-IP security services. Support dynamic scaling of the inspection service.

Deployment pattern: with the baseline and goal in mind, we will leverage GWLB to insert BIG-IP security services into the relevant traffic flows, using GWLB's ability to separate the security devices (BIG-IPs) from the workloads they protect. The BIG-IPs are deployed in their own VPC, the security VPC.

Next, we can dive into each of the individual tasks:

- The security VPC
- Provisioning the network to send traffic through the security VPC

The security VPC

The BIG-IP fleet is deployed in a dedicated security VPC. GWLB is used to distribute the traffic to the BIG-IP target group, and the GWLB is exposed using the GWLB endpoint service. As this VPC will be attached to the TGW, we need to create TGW attachments. The TGW attachments are created in their own subnets to gain more control over routing. Next, we create GWLB endpoints in multiple availability zones inside the security VPC.
Finally, the routing tables are adjusted to send all traffic from the TGW attachment subnets (traffic that needs to be inspected) to the GWLB endpoint in the respective AZ. Traffic that has been inspected is routed back to the TGW (using the GWLB endpoint routing table).

These are the actions we need to take in the provider VPC to work with TGW:

- Deploy a group of BIG-IPs.
- Create a GWLB target group and associate the BIG-IPs with it.
- Create a GWLB and assign the previously created target group to the listener.
- Create a GWLB endpoint service and associate it with the GWLB.
- Configure the BIG-IPs to receive traffic over the tunnel and inspect it according to the desired policy.
- Create TGW attachments in dedicated subnets.
- Create GWLB endpoints and attach them to the GWLB endpoint service.
- Configure the routing tables to send all traffic from the TGW attachments to the GWLB endpoints, and from the GWLB endpoints to the TGW attachments (traffic from the security VPC to the TGW uses a dedicated TGW routing table).

On the TGW, the security VPC uses its own routing table, 'Security VPC Rtb', which allows it to reach anywhere in the network. This routing table receives traffic coming back from the security VPC after inspection, which means we can send it directly to the destination.

Diagram: Security VPC with TGW attachments

Provisioning the network to send traffic through the security VPC

Now that we have our security services connected to the TGW, we need to provision the network to send all the relevant traffic flows to the security VPC.
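The TGW-specific steps in the provider-VPC list above (the GENEVE target group, the attachments, and the routes between attachment subnets and endpoints) can be sketched as boto3 request parameters; every ID below is an illustrative placeholder:

```python
# Sketch of the TGW wiring as boto3 request parameter shapes (ec2 and
# elbv2 clients). All IDs are illustrative placeholders.

def gwlb_target_group_params(vpc_id):
    """elbv2.create_target_group() parameters: GWLB target groups use the
    GENEVE protocol on port 6081."""
    return {
        "Name": "bigip-fleet",
        "Protocol": "GENEVE",
        "Port": 6081,
        "VpcId": vpc_id,
        "TargetType": "instance",
    }

def tgw_attachment_params(tgw_id, vpc_id, attachment_subnet_ids):
    """ec2.create_transit_gateway_vpc_attachment() parameters: the security
    VPC attaches to the TGW via dedicated subnets."""
    return {
        "TransitGatewayId": tgw_id,
        "VpcId": vpc_id,
        "SubnetIds": attachment_subnet_ids,
    }

def attachment_to_endpoint_route(route_table_id, endpoint_id):
    """ec2.create_route() parameters: a TGW attachment subnet sends all
    traffic (to be inspected) to the GWLB endpoint in the same AZ."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": "0.0.0.0/0",
        "VpcEndpointId": endpoint_id,
    }

def endpoint_to_tgw_route(route_table_id, tgw_id):
    """ec2.create_route() parameters: inspected traffic leaving the GWLB
    endpoint subnet is routed back to the TGW."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": "0.0.0.0/0",
        "TransitGatewayId": tgw_id,
    }
```

As before, each dict is meant to be splatted into the corresponding client call, e.g. `ec2.create_route(**attachment_to_endpoint_route("rtb-att-az1", "vpce-az1"))`.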
Controlling the traffic flows is done using the following components:

- TGW routing tables - TGW supports a different routing table for each attachment.
- Ingress routing table - controls traffic entering the internet VPC from the internet through the internet gateway.
- Subnet routing tables - each subnet may have its own routing table; we use this capability to control traffic.

Since we are already sending all the traffic from the spoke VPCs to the TGW, there is no need to change any of the routes inside the spoke VPCs. We only need to change the TGW routing table for the spoke VPC attachments: we create a new routing table, 'TGW ingress Rtb', that points all traffic to the security VPC. Next, we change the 'ingress' routing table of the internet VPC so that it sends all traffic to the security VPC for inspection.

Diagram: Ingress/egress and inter-VPC inspection using transit gateway and GWLB

With all of the routing tables and components involved, it helps to look at a visual diagram of the traffic flows. Here are some diagrams that describe the different traffic flows and how they get inspected. Keep in mind that routing is not stateful, meaning it is our responsibility to ensure symmetric traffic flows.

Diagram: Spoke10 to Spoke20 logical traffic flow

Diagram: External user to Spoke10 web service

Diagram: Spoke10 to the internet

Summary

This deployment pattern leverages BIG-IP security services and the GWLB integration to secure an AWS environment while offering the following benefits:

- Scalability - BIG-IPs are deployed as a dynamic group that can scale to meet any performance needs.
- Operational efficiency - all BIG-IP instances are active and processing traffic.
- Administrative separation - the security devices are separated from the workload VPCs.

Next steps:

Check out our AWS documentation: https://clouddocs.f5.com/cloud/public/v1/aws_index.html

Test the integration yourself - use a self-service lab that you can deploy in your own AWS account (fully automated deployment using Terraform):
https://github.com/f5devcentral/f5-digital-customer-engagement-center/tree/main/solutions/security/ingress-egress-inter-vpc-fw-gwlb