F5 Distributed Cloud and AWS VPC Lattice
F5 Distributed Cloud makes it easy to connect your AWS, hybrid, and multi-cloud environments globally and to incorporate innovative technologies like AWS VPC Lattice. Our solution combines network-centric and application-centric approaches, creating agility in your organization and enhancing your security posture. This approach allows you to focus on service networking or network routing to solve a wide range of challenges encountered in the multi-cloud networking (MCN) domain. Integrating new features like VPC Lattice into your complex enterprise IT estate is easy with F5. Our solution lets you stretch VPC Lattice across different regions and external environments. In this article we will walk through a network-centric approach and an application-centric approach and show you how we can stretch a VPC Lattice service anywhere you want it to go.

Example Topology

In the example topology we start with deployments in EU-NORTH-1, US-WEST-2, and US-EAST-1 where we have deployed applications. These VPCs have unique IP CIDR blocks, and we can connect them with a network-centric solution using F5 Distributed Cloud Network Connect. This is a straightforward task: we create a global network object; think of this as a container to which we attach our VPCs. Next we configure a Network Connector, which creates the policy between the sites to be connected. Then we configure the sites to join the global network and update the interior routing of each site toward the F5 CE nodes. For example, I have added my EU-NORTH-1 site to enable connectivity to and from its inside network.

Everything is "simple." You deploy F5 Distributed Cloud CEs to the different sites, and the CE nodes establish IPsec/SSL tunnels to the F5 Regional Edges and to each other, creating a network topology with traffic flowing on an encrypted global network. The traffic may flow directly between sites if you have leveraged a site mesh group, or it may traverse the F5 Global Network if the mesh is not established. In the image below, you can see that I have four sites configured. Some of the sites have tunnels directly between them, and others will traverse an F5 Regional Edge.

If we test our network-layer connectivity, we will be able to connect end to end as long as each site has its internal routing pointing to the CE nodes. The regions use the following CIDR blocks:

EU-NORTH-1: 10.200.0.0/16
US-EAST-1: 10.0.0.0/16
US-WEST-2: 10.100.0.0/16, 10.5.0.0/16 and 10.6.0.0/16

From US-EAST-1 to US-WEST-2:

[ec2-user@ip-10-0-25-46 ~]$ ping 10.5.129.216
PING 10.5.129.216 (10.5.129.216) 56(84) bytes of data.
64 bytes from 10.5.129.216: icmp_seq=1 ttl=123 time=76.7 ms
64 bytes from 10.5.129.216: icmp_seq=2 ttl=124 time=96.3 ms
64 bytes from 10.5.129.216: icmp_seq=3 ttl=124 time=107 ms

From US-WEST-2 to EU-NORTH-1:

[ec2-user@ip-10-5-129-216 ~]$ ping 10.200.210.196
PING 10.200.210.196 (10.200.210.196) 56(84) bytes of data.
64 bytes from 10.200.210.196: icmp_seq=1 ttl=125 time=164 ms
64 bytes from 10.200.210.196: icmp_seq=2 ttl=125 time=161 ms
64 bytes from 10.200.210.196: icmp_seq=3 ttl=125 time=162 ms

From EU-NORTH-1 to US-EAST-1:

[ec2-user@ip-10-200-210-196 ~]$ ping 10.0.25.46
PING 10.0.25.46 (10.0.25.46) 56(84) bytes of data.
64 bytes from 10.0.25.46: icmp_seq=1 ttl=125 time=116 ms
64 bytes from 10.0.25.46: icmp_seq=2 ttl=126 time=118 ms
64 bytes from 10.0.25.46: icmp_seq=3 ttl=126 time=116 ms

From US-WEST-2 to US-EAST-1:

[ec2-user@ip-10-5-129-216 ~]$ ping 10.0.25.46
PING 10.0.25.46 (10.0.25.46) 56(84) bytes of data.
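Getting to this point depends on each VPC's route tables sending the remote CIDRs to the local CE node's inside interface. Below is a minimal sketch of what that looks like with the AWS CLI for the EU-NORTH-1 site; the route-table and ENI IDs are hypothetical placeholders for your own values.

# Point traffic for the remote regions at the CE node's inside ENI.
# rtb-0aaa1111bbbb2222c and eni-0bbb3333cccc4444d are placeholders.
aws ec2 create-route --route-table-id rtb-0aaa1111bbbb2222c \
    --destination-cidr-block 10.0.0.0/16 \
    --network-interface-id eni-0bbb3333cccc4444d
aws ec2 create-route --route-table-id rtb-0aaa1111bbbb2222c \
    --destination-cidr-block 10.5.0.0/16 \
    --network-interface-id eni-0bbb3333cccc4444d

# Confirm the routes landed as expected.
aws ec2 describe-route-tables --route-table-ids rtb-0aaa1111bbbb2222c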
64 bytes from 10.0.25.46: icmp_seq=21 ttl=125 time=105 ms
64 bytes from 10.0.25.46: icmp_seq=22 ttl=125 time=91.1 ms
64 bytes from 10.0.25.46: icmp_seq=23 ttl=125 time=88.1 ms

From EU-NORTH-1 to US-WEST-2:

[ec2-user@ip-10-200-210-196 ~]$ ping 10.5.129.216
PING 10.5.129.216 (10.5.129.216) 56(84) bytes of data.
64 bytes from 10.5.129.216: icmp_seq=1 ttl=124 time=169 ms
64 bytes from 10.5.129.216: icmp_seq=2 ttl=124 time=162 ms
64 bytes from 10.5.129.216: icmp_seq=3 ttl=124 time=161 ms

From US-EAST-1 to EU-NORTH-1:

[ec2-user@ip-10-0-25-46 ~]$ ping 10.200.210.196
PING 10.200.210.196 (10.200.210.196) 56(84) bytes of data.
64 bytes from 10.200.210.196: icmp_seq=1 ttl=125 time=111 ms
64 bytes from 10.200.210.196: icmp_seq=2 ttl=126 time=110 ms
64 bytes from 10.200.210.196: icmp_seq=3 ttl=126 time=111 ms

Being able to build a network across clouds is critical in modern IT. There is an old saying: "If it PINGs, it is not the network." While PING does prove connectivity, it does not prove that we have a solution that will let the organization grow with agility while minimizing complexity. Overlapping IPs (due to default VPC ranges, mergers, acquisitions, or the simple scale of a deployment) are common, and you may have dozens or hundreds of VPCs and accounts that need to be connected in each region, not to mention the sites located outside of AWS. Let's explore why we will need to move up the stack to stretch VPC Lattice.

Everything Evolves

The next project requires us to connect a system in EU-NORTH-1 to an application located in the US-EAST-1 environment. We head down the same path of "let's just route it to the global network." During the project phase a new challenge is uncovered: one of the applications or APIs that we need to connect to is presented in US-EAST-1 via VPC Lattice. VPC Lattice was deployed to simplify VPC and account connectivity within the region, and it has created a new challenge because Lattice runs in link-local address space.

DNS lookup of the service in US-EAST-1:

[ec2-user@ip-10-0-15-28 ~]$ nslookup hparr-x-svcs.us-east-1.on.aws
Server:    10.0.0.2
Address:   10.0.0.2#53

Non-authoritative answer:
Name:    hparr-x-svcs.us-east-1.on.aws
Address: 169.254.171.32
Name:    hparr-x-svcs.us-east-1.on.aws
Address: fd00:ec2:80::a9fe:ab20

We cannot just route 169.254.0.0/16 across our global network. If we refer to RFC 3927:

"A sensible default for applications which are sending from an IPv4 Link-Local address is to explicitly set the IPv4 TTL to 1. This is not appropriate in all cases, as some applications may require that the IPv4 TTL be set to other values.

An IPv4 packet whose source and/or destination address is in the 169.254/16 prefix MUST NOT be sent to any router for forwarding, and any network device receiving such a packet MUST NOT forward it, regardless of the TTL in the IPv4 header.

Similarly, a router or other host MUST NOT indiscriminately answer all ARP Requests for addresses in the 169.254/16 prefix. A router may of course answer ARP Requests for one or more IPv4 Link-Local address(es) that it has legitimately claimed for its own use, according to the claim-and-defend protocol described in this document.

This restriction also applies to multicast packets. IPv4 packets with a Link-Local source address MUST NOT be forwarded outside the local link, even if they have a multicast destination address."

The link-local address is not the only obstacle: VPC Lattice leverages the concept of a service network to cross VPC and account boundaries, and a service network is constrained to a single region.
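You can see the regional scope for yourself by enumerating the Lattice service networks and services with the AWS CLI; the same calls against another region come back empty. A minimal sketch (the names in your output will of course be your own):

# List the Lattice service networks and services visible in US-EAST-1...
aws vpc-lattice list-service-networks --region us-east-1
aws vpc-lattice list-services --region us-east-1

# ...and note that the same service network does not exist in EU-NORTH-1.
aws vpc-lattice list-service-networks --region eu-north-1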
Routing is not allowed, and the virtual network that presents the Lattice service cannot be "stretched" from US-EAST-1 to EU-NORTH-1. We need a new solution; a network-only solution is not enough.

The good news is that F5 Distributed Cloud can create application-centric solutions with Distributed Cloud App Connect, allowing us to abstract network addresses and network topologies and even remove the need to connect the networks directly at all, which increases the security posture of our deployment. To address this issue, we leverage F5 Distributed Cloud App Connect to present the application/API to the EU site, exposing the VPC Lattice-based application in US-EAST-1.

First we create an Origin Pool on the US-EAST-1 site that points to VPC Lattice on that site. Next we map the pool to a load balancer. Then we add a host that will match the DNS record we will use to stretch our Lattice service across sites and clouds. Finally we present that application to any site we want.

If you have not already created the DNS records, you will need to do so. In my example, we use AWS Route 53 with an internal zone and latency-based records that are tied to regions. We can then create an internal DNS record that points directly to the XC nodes' inside interfaces, or to an internal load balancer that steers traffic to the XC nodes in EU-NORTH-1. On our EU-NORTH-1 test instance, we will now resolve the DNS name f5-demo.stretched.app to one of the CE ENI IPs (each site has one or more CE nodes):

[ec2-user@ip-10-200-210-196 ~]$ nslookup f5-demo.stretched.app
Server:    10.200.0.2
Address:   10.200.0.2#53

Non-authoritative answer:
Name:    f5-demo.stretched.app
Address: 10.200.255.24
Name:    f5-demo.stretched.app
Address: 10.200.1.164
Name:    f5-demo.stretched.app
Address: 10.200.11.75

[ec2-user@ip-10-200-210-196 ~]$

Trying to access the app over the F5 XC App Connect interface works as desired. VPC Lattice requires that we transform the headers to match the VPC Lattice hostname; this happens automatically in Distributed Cloud. If we look at the application response, we see the header transformation and the XFF header showing the client IPs and how the instance in US-EAST-1 directed the traffic into the link-local range.

In our discovery of VPC Lattice services in US-EAST-1, we are presented with another challenge. The application consists of about 100 APIs, and they have a parallel deployment in US-WEST-2. The SLA requires that a single API being out of service in US-EAST-1 cannot force all APIs to move to US-WEST-2. The different APIs have different data stores and operational models: some are active/active in both locations, and some are active in only one location at a time. To further complicate the SLA, the company has mandated that both regions (EAST and WEST) be in production at the same time. Not only do we need to support these APIs, we need to do so at scale and make it transparent.

In our first step we will create DNS records for the US-WEST-2 region pointing to the CE nodes.
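Here is a minimal sketch of what such a latency-based record could look like with the AWS CLI. The private hosted-zone ID and the CE inside address are hypothetical; latency routing requires a Region and a SetIdentifier on each record:

# Upsert a latency-based A record for the stretched app in the private zone.
# Z0123456789EXAMPLE and the IP value are placeholders for your own values.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "f5-demo.stretched.app",
        "Type": "A",
        "SetIdentifier": "us-west-2",
        "Region": "us-west-2",
        "TTL": 60,
        "ResourceRecords": [{"Value": "10.5.129.10"}]
      }
    }]
  }'

A matching record with SetIdentifier and Region set to the EU values would cover the EU-NORTH-1 site, letting each client resolve to its nearest CE nodes.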
Now we test access to the app in US-EAST-1, which works because when we deployed the application in the Distributed Cloud console we selected the US-WEST-2 site (you can add or remove sites as necessary over time). Great! US-WEST-2 can reach the US-EAST-1 Lattice app; next, let's bring up a US-WEST-2 origin. This can be an additional origin pool, or it can be done by adding servers from US-WEST-2 into the same origin pool. Below is an example of how the VPC Lattice service is discovered and added to the pool via local DNS.

Testing connectivity to the US-WEST-2 service from US-WEST-2.

Testing connectivity: our EU-NORTH-1 client can send traffic to both US-EAST-1 and US-WEST-2 via the application presented on the CE nodes in EU-NORTH-1.

We now have a topology where VPC Lattice-based services in one region are exposed in other regions, and we can create fault-tolerant patterns for failover, ensuring that if a given URL is offline in one site, traffic can be steered to another site with minimal interruption.

Stretching To the Edge

What about AWS Outposts? Local Zones? Wavelength? The short answer is yes. F5 Distributed Cloud can be used to address multiple needs in these deployments, such as:

Providing load balancing to applications on these systems
Web Application Firewall services
Multi-cloud networking
VPC Lattice access across regions

Below you can see access from an Outpost in Frankfurt connecting to my Lattice applications in the US through a CE node deployed on the AWS Outpost.

Global Networks and Security

When we focus on connecting networks, we increase the span of our networks, increase their complexity, and increase the challenges in securing them. F5 Distributed Cloud offers network firewall capabilities for network-centric solutions. When we move from network connectivity to application connectivity, we limit the span of each network. Users connect to a local system, which is then configured to proxy application traffic. For these applications, we can deliver WAF services and visibility tools that span all environments.

Reaching Across Clouds and the Internet

Does this work across clouds? Yes. Does this work in a data center? Yes. Can I use F5 Distributed Cloud to present an application that has VPC Lattice-based components to the internet, similar to NGINX and BIG-IP? Yes. Can you use F5 Distributed Cloud to expose your applications without directly exposing your AWS environment to the internet? Yes. The F5 Global Network presents your application to the Internet, and traffic is tunneled to the appropriate location. This solution has the added benefits of global anycast, DoS/DDoS protections, and CDN capabilities.

F5 Distributed Cloud provides unique capabilities that pay dividends. With network-centric and application-centric architectures, and the ability to leverage our global network or direct site-mesh connectivity, you can solve the challenges of interconnecting a hybrid/multi-cloud estate where, when, and how it is necessary.

A complete Multi-Cloud Networking walkthrough with F5 Distributed Cloud
F5 Distributed Cloud – Multi-Cloud Networking

F5 Distributed Cloud (F5 XC) provides a Software-as-a-Service-based platform to connect, deliver, secure, and operate your networks and applications across any environment. This walkthrough contains two sections. The first section uses F5 Distributed Cloud Network Connect to network across cloud locations and providers with simplified provisioning and end-to-end security. The second uses F5 Distributed Cloud App Connect and shows how to securely connect distributed workloads across cloud and edge locations with integrated app security.

Distributed Cloud Network Connect

Network Connect helps customers establish a multi-cloud networking fabric with end-to-end cloud orchestration, a gateway that implements L3-L7 functions to enforce network connectivity and security, and a unified policy with central visibility for collaboration across NetOps and SecOps.

1. Deploy F5 XC Customer Edge Site(s)

Step 1: Establish a multi-cloud networking fabric by deploying F5 XC Customer Edge (CE) sites (cloud, edge, on-prem).

➡️ See the following article and connected video to learn how to use the Distributed Cloud Console to deploy a CE in AWS and in Azure, and then how to route traffic between each of the sites.

Using F5 Distributed Cloud Network Connect to transit, route, & secure private cloud environments

➡️ F5 XC can orchestrate private connectivity, including AWS PrivateLink, Azure CloudLink, and many other private transport providers. The following article covers this capability in greater detail.

Using F5 Distributed Cloud private connectivity orchestration for secure multi-cloud infrastructure

Step 2: Customers onboard the required VPCs/VNets to the F5 XC CE sites to participate in the multi-cloud fabric. F5 XC then orchestrates cloud networking constructs to attract traffic from these VPCs (termed spokes) and then enforce L3-L7 network services. Cloud orchestration includes creating an AWS TGW, updating route tables, setting up Azure VNet peering, configuring AWS Direct Connect or Azure ExpressRoute and related resources to establish private connectivity, and much more.

➡️ See the following series of articles to learn how to use the Infrastructure as Code utility Terraform to deploy and connect Distributed Cloud CEs in AWS, Azure, and Google Cloud.

Overview & AWS Deployment with F5 Distributed Cloud Multi-Cloud Networking
AWS to Azure via Layer 3 & Global Network with F5 Distributed Cloud Multi-Cloud Networking
Demo Guide: A step-by-step walkthrough using Terraform with Distributed Cloud Network Connect in AWS

MCN 1: Deploy a F5 XC CE Site
MCN 2: Cookie cutter architecture - fully orchestrated: attach spoke VPC/VNets seamlessly.
MCN 3: Sites deployed across the globe to establish a multi-cloud networking fabric.
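Because F5 XC drives the cloud-side constructs itself in Step 2, you can verify the orchestration from each provider's CLI. A minimal sketch, assuming configured AWS and Azure CLIs; the resource-group and VNet names are hypothetical placeholders:

# AWS: confirm the XC-orchestrated Transit Gateway attachments are up.
aws ec2 describe-transit-gateway-attachments \
  --filters Name=state,Values=available --region us-east-1

# Azure: confirm the VNet peerings created for the onboarded spokes.
az network vnet peering list \
  --resource-group my-spoke-rg --vnet-name my-spoke-vnet --output table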
2. Configure Network Segments in Distributed Cloud

Step 1: Configure Network Segments. These Network Segments provide an end-to-end, globally isolated network.

MCN 4: Configure a global Network Segment

Step 2: Associate F5 XC CE sites (including VLANs/interfaces for on-prem/edge sites) and onboarded VPCs/VNets with these network segments to create an isolated network within the multi-cloud networking fabric.

➡️ Steps 4, 6, and 10+ in the following article show how to connect the Distributed Cloud Global Network and use it to route traffic between different CE sites.

Using F5 Distributed Cloud Network Connect to transit, route, & secure private cloud environments

3. Define Security Policies

Step 1: Define security policies such as forward proxy policies, network security policies, and traffic policers for your entire multi-cloud networking fabric, using the power of labels to easily express intent without complexities such as IP addresses.

MCN 5: Enhanced Firewall Policy with the power of labels

4. Integrate with 3rd Party NFV Services such as Palo Alto Networks Firewall

Step 1: Seamlessly provision NFV services, such as BIG-IP AWAF or Palo Alto Networks Firewall, into any F5 XC CE site.

MCN 6: Orchestrate 3rd party firewalls like Palo Alto

Step 2: Use the power of labels to easily express the intent to steer traffic to these 3rd party NFV appliances.

MCN 7: Seamlessly steer traffic towards 3rd party NFV services such as PAN firewall

➡️ Learn how to deploy a Palo Alto firewall using Distributed Cloud and a Palo Alto Panorama server, and then redirect traffic to the firewall using Enhanced Firewall Policies.

Easily Deploy Your Palo Alto NGFW with F5 Distributed Cloud Services

5. Monitor & Troubleshoot your Network

NetOps and SecOps can collaborate using a single platform to monitor and troubleshoot networking issues across the multi-cloud fabric.

MCN 8: Powerful monitoring dashboards & troubleshooting tools for your entire secure multi-cloud network fabric.

Distributed Cloud App Connect

App Connect helps customers deliver applications across their multi-cloud networking fabric, including the internet, without worrying about the underlying networking. Its distributed proxy architecture provides full self-service capability and application isolation via namespaces.

1. Establish a Secure Multi-Cloud Network Fabric

Utilize Multi-Cloud Network Connect to deploy F5 XC CE sites in the environments that host your applications.

2. Discover Any App Running Anywhere

Step 1: Discover all apps running across your environments by configuring service discoveries. Use DNS-based service discovery to discover legacy apps and K8s/Consul-based service discovery to discover modern apps.

MCN 9: Discover apps in any environment - sample showing apps discovered in a K8s cluster.

3. Deliver Any App Anywhere, incl. the Public Internet

Step 1: Configure a Load Balancer which will connect apps (Origins) discovered in any environment and then deliver them (Advertise) to any environment.

MCN 10: Leverage distributed proxy architecture to connect an App running in Azure to AWS – without configuring ANY networking.

Step 2: Apps can be delivered (Advertised) directly to the internet using F5 XC's performant anycast global backbone, with DNS delegation and TLS certificate management, by simply setting the VIP advertisement to 'Internet'.

MCN 11: Live traffic graph showing seamlessly connecting App in Azure -> AWS and then delivering the App in AWS to the public internet.

➡️ Navigate each step of the process, from deploying CEs to using App Connect to connect app services locally and advertise the frontend to the Internet. The following collection of articles uses the Distributed Cloud Console to facilitate the deployment and demonstrates how to automate the process using the Infrastructure as Code utility Terraform to orchestrate everything.

Use F5 Distributed Cloud to Connect Apps Running in Multiple Clusters and Sites
Azure & Layer 7 Networking with F5 Distributed Cloud Multi-Cloud Networking
Demo Guide: Using Terraform to connect backend-send services via Distributed Cloud App Connect in Azure
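Once a load balancer is advertised to the Internet, a quick check from any client confirms that DNS delegation and TLS are in place. A minimal sketch, using a hypothetical hostname:

# Resolve the delegated hostname; it should return the F5 anycast VIP(s).
dig +short app.example-stretched.net

# Fetch the response headers; a valid certificate and an HTTP response
# confirm the load balancer is serving the app from the global network.
curl -sI https://app.example-stretched.net/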
4. Secure your Apps

Step 1: Secure apps with industry-leading application security services such as WAF, bot defense, L7 DoS, API security, client-side defense, and many more with a single click.

MCN 12: One click application security for all your applications – anywhere

➡️ The following demo guide shows how to deploy a web app globally and secure it.

Distributed Cloud WAAP + CDN Demo Guide

5. Monitor & Troubleshoot your Apps

SecOps, NetOps, and DevOps can collaborate using a single platform to monitor and troubleshoot application issues across the multi-cloud fabric.

MCN 13: Performance & Security dashboards for every application namespace - each namespace contains many load balancers.
MCN 14: Performance & Security dashboard for each Load Balancer
MCN 15: Various other security & performance tools to help maintain a healthy secure performant multi-cloud application fabric.

Conclusion

Using the Network Connect and App Connect services in Distributed Cloud, it's easy to deploy, connect, and secure apps that run in multiple clouds. The F5 platform automatically handles the connectivity and routing and allows customized access, enabling apps to be deployed globally or privately in just a few clicks.

Additional Resources

Distributed Cloud Network Connect
Distributed Cloud App Connect
Demo Guide: F5 XC MCN