Multi-Cloud Networking with F5 Distributed Cloud
One of F5 Distributed Cloud's core features is providing a simple way to interconnect resources in multiple clouds, creating a seamless communication space for application services. Combined with a centralized control and management plane, it becomes a single pane of glass for configuration, security, and visibility, eliminating the excessive administration overhead of managing multiple clouds. This article gives an overview of the F5 Distributed Cloud components and features that contribute to the multi-cloud transit story. Even though these components are the foundation for many other F5 Distributed Cloud services, the focus of this article is traffic transit.
There are many location options for deploying applications or their parts: public clouds, private clouds, CDNs, ADNs, and so on. All of them have their pros and cons, or even unique capabilities, that push owners to distribute application components across them. Even though this sounds like the right decision, it brings significant overhead to developers and DevOps teams. The reason is that the platforms are largely incompatible with one another. This not only requires a separate set of knowledge for each platform but also custom solutions to stitch them together into a boundless and secure communication space.
F5 Distributed Cloud solves this challenge by providing a unified platform for creating remote application deployment sites and securely connecting them by attaching them to a virtual global network. The sections below review the F5 Distributed Cloud components that help interconnect clouds.
F5 Distributed Cloud Global Network
From a multi-cloud networking perspective, F5 Distributed Cloud provides a fast and efficient backbone, the F5 Distributed Cloud global network. It pushes traffic through at multi-terabit rates and connects remote sites using overlay networks. The backbone consists of two main components. The first is 21 points of presence (PoPs) located all over the world, allowing customer sites to connect to the nearest PoP. The second is redundant private connections between PoPs and to tier-1 carriers, public cloud providers, and SaaS services, providing high-speed, reliable traffic transfer with much lower latency than the public Internet. The picture below represents the logical topology of the F5 Distributed Cloud connecting multiple remote sites.
The F5 Distributed Cloud global network forms the physical network infrastructure that is securely shared by all tenants/customers. Users can create overlay networks on top of it to interconnect resources in remote sites or to expose application services to the Internet.
F5 Distributed Cloud Site
As per the F5 Distributed Cloud documentation, "Site is a physical or cloud location where F5 Distributed Cloud Nodes are deployed". That is true; however, it doesn't give much clarity from a multi-cloud transit point of view. From that perspective, a site is a remote network that uses a cluster of F5 Distributed Cloud node(s) as a gateway to connect it back to the F5 Distributed Cloud global network. In simple words, if the F5 Distributed Cloud global network is the trunk, then sites are the leaves.
F5 Distributed Cloud unifies and simplifies the approach to creating and interconnecting sites, whether they are located in a public cloud or in a private data center. The DevOps team doesn't need to manually manage resources in a remote location. Instead, F5 Distributed Cloud automatically creates a remote network and deploys F5 Distributed Cloud node(s) to attach it back to the global backbone. Currently, F5 Distributed Cloud supports sites in the following locations:
- AWS VPC
- AWS TGW
- GCP VPC
- Azure VNET
- Physical Data Center
- Edge location
After a site is registered with the F5 Distributed Cloud Console and its nodes are deployed, networking and application delivery configuration is unified regardless of the site's type.
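As a concrete illustration, sites can also be created programmatically. The snippet below is a minimal sketch of creating an AWS VPC site object through the F5 Distributed Cloud Console REST API using Python and the requests library; the tenant name, API token, endpoint path, and spec fields shown here are illustrative assumptions and should be verified against the current API reference.

```python
# Minimal sketch: creating an AWS VPC site via the Console REST API.
# Tenant, token, endpoint path, and spec fields are assumptions; consult
# the API reference for the exact object schema.
import requests

TENANT = "acme"                       # hypothetical tenant name
API_TOKEN = "REPLACE_WITH_API_TOKEN"  # created under Console credentials
BASE = f"https://{TENANT}.console.ves.volterra.io/api"

aws_vpc_site = {
    "metadata": {"name": "aws-east-site", "namespace": "system"},
    "spec": {
        # Illustrative fields only: region, instance type, and VPC
        # parameters vary by site type.
        "aws_region": "us-east-1",
        "instance_type": "t3.xlarge",
    },
}

resp = requests.post(
    f"{BASE}/config/namespaces/system/aws_vpc_sites",
    headers={"Authorization": f"APIToken {API_TOKEN}"},
    json=aws_vpc_site,
    timeout=30,
)
resp.raise_for_status()
print("site object submitted:", resp.status_code)
```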
F5 Distributed Cloud Node
To better understand how inter-site connectivity is organized, it is useful to take a closer look at the F5 Distributed Cloud nodes. F5 Distributed Cloud nodes are Linux-based software appliances that act as a K8s cluster and form a super-converged infrastructure to deliver networking and application delivery services. The picture below represents a logical diagram of the services running on an F5 Distributed Cloud node.
A cluster of nodes sits at the edge of a site and either runs customer workloads or acts as a gateway interconnecting site-local resources with the F5 Distributed Cloud global network.
Site Deployment Modes
F5 Distributed Cloud sites can be deployed in two modes: on-a-stick or default gateway. In on-a-stick mode, ingress and egress traffic is handled on a single network interface of the node, as shown in the diagram.
In this mode, the network functionality is limited to a load balancer, a gateway for Kubernetes, an API gateway, or a generic proxy. This mode is mostly used for stub sites that run application workloads using F5 Distributed Cloud Mesh services such as virtual K8s.
In default-gateway mode, network functions such as routing and firewalling between the inside and outside networks can be enabled in addition to the functionality available in on-a-stick mode.
This mode is suitable not only for running workloads but also for acting as a router that forwards traffic to internal resources deployed behind the site.
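The difference between the two modes boils down to how many interfaces the nodes use. The fragment below is a purely illustrative sketch of that contrast; the field names are hypothetical and do not reflect the exact site object schema.

```python
# Hypothetical contrast of the two deployment modes; field names are
# illustrative, not the real API schema.
on_a_stick = {
    # One interface: ingress and egress share eth0, so the node can only
    # act as a proxy/load balancer for traffic it terminates.
    "node_interfaces": ["eth0"],
}

default_gateway = {
    # Two interfaces: eth0 faces the outside network, eth1 the inside
    # network, so routing and firewalling between them become possible.
    "node_interfaces": ["eth0", "eth1"],
}
```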
Inter-Site Networking
Once the F5 Distributed Cloud nodes are deployed across the sites, inter-site connectivity can be configured via the F5 Distributed Cloud Console. The picture below shows a logical diagram of all components participating in the connectivity setup.
Connectivity configuration includes three steps. First of all, virtual networks have to be created with proper subnetting. The following network types are available (a configuration sketch follows the list):
- Per-Site: Even though this network is configured once globally, it is instantiated as an independent network on each site. As a result, the network instantiated on one site cannot talk to the same network instantiated on another site.
- Global: This network is exactly the same across all sites on which it is instantiated. Any endpoint that is a member of this network can talk to other members irrespective of the site they belong to.
- Site-Local: There can only be one network of this type on a particular site, and it is automatically configured by the system during bootstrap. It can be considered the site-local outside network and needs access to the public Internet for registration during the site bring-up process.
- Site-Local-Inside: There can only be one network of this type on a particular site, and it is automatically configured during bootstrap for sites with two interfaces (default-gateway mode).
- Public: This is a conceptual virtual network that represents the Public Internet. It is only present on the F5 Distributed Cloud Regional Edge sites and not on customer sites.
- Site-Local-Service: This is another conceptual virtual network; it represents the Kubernetes service network within the site. This is the network from which cluster IPs for k8s service configurations are allocated.
- VER-Internal: This is another conceptual virtual network; it represents the network where all control plane services for the site live. It is only used if the control plane needs access to tenant services. For example, a site may need access to customer secrets stored in HashiCorp Vault running on another site.
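For instance, a global virtual network could be defined with a payload like the one below and submitted with the same POST pattern as the site-creation sketch above (against a virtual_networks path). The spec fields are assumptions; check the API reference for the exact schema.

```python
# Hypothetical payload for a global virtual network; endpoints attached
# to it on different sites can reach each other. Field names are assumed.
global_vn = {
    "metadata": {"name": "global-app-net", "namespace": "system"},
    "spec": {
        # Choice field selecting the network type; a per-site network
        # would select a different choice in the same spec.
        "global_network": {},
    },
}
```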
Then those networks have to be attached to one of the F5 Distributed Cloud node's network interfaces. There are two types of interfaces that can be configured, “physical” and “logical” (a sketch follows the list):
- For every networking device (e.g., eth0, eth1) on a node within a site, a “physical” interface can be configured.
- “Logical” interfaces are child interfaces of a physical interface, for example, VLANs.
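As a sketch, a “logical” VLAN interface riding on a physical device and attached to the global network defined earlier might look like the payload below; the field names are illustrative assumptions rather than the exact schema.

```python
# Hypothetical payload for a logical (VLAN) interface attached to a
# virtual network; field names are assumed for illustration.
vlan_interface = {
    "metadata": {"name": "inside-vlan-100", "namespace": "system"},
    "spec": {
        "device": "eth1",    # physical device the VLAN rides on
        "vlan_id": 100,      # 802.1Q tag of the logical interface
        "virtual_network": { # reference to the network defined earlier
            "name": "global-app-net",
            "namespace": "system",
        },
        "dhcp_client": {},   # obtain the interface address via DHCP
    },
}
```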
Lastly, a network connector defines the connectivity type between networks. There are three types of network connectors (a sketch follows the list):
- Direct: The inside and outside networks are connected directly. Endpoints in each virtual network can initiate connections to the other. If the inside or outside network is connected to other external networks, dynamic routing can be configured to exchange routes (currently, only BGP is supported for dynamic routing).
- SNAT: The inside network is connected to the outside network using SNAT. Endpoints in the inside network can initiate connections to the outside network; however, endpoints in the outside network cannot initiate connections to the inside network.
- Forward-Proxy: Along with SNAT, a forward proxy can be configured. This enables inspection of HTTP and TLS connections. Using a forward proxy, URL filtering and visibility into the different hosts accessed from the inside network can be configured.
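Putting the pieces together, a SNAT connector joining a site's inside network to its outside network might be expressed as the payload below, submitted via the same POST pattern as the earlier sketches; again, the field names are assumptions to verify against the API reference.

```python
# Hypothetical payload for a SNAT network connector: endpoints in the
# inside network can initiate connections outward, but not vice versa.
# Field names are assumed for illustration.
snat_connector = {
    "metadata": {"name": "inside-to-outside-snat", "namespace": "system"},
    "spec": {
        "snat": {
            "inside_network": {"name": "site-local-inside", "namespace": "system"},
            "outside_network": {"name": "site-local", "namespace": "system"},
        },
    },
}
```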
Mixing network types with network connectors gives a user a wide variety of options to let resources located in one site talk to resources in another site, or even to the Internet, and vice versa. For an example, or to practice configuring inter-site connectivity, take a look at the step-by-step videos (link1, link2) or use a simulator (link1, link2).