One of Volterra's core features is a simple way to interconnect resources across multiple clouds, creating a seamless communication space for application services. In conjunction with a centralized control and management plane, it becomes a single pane of glass for configuration, security, and visibility, eliminating the excessive administrative overhead of managing multiple clouds. This article gives an overview of the Volterra components and features that contribute to the multi-cloud transit story. Even though these components are the foundation for many other Volterra services, the focus of this article is traffic transit.
There are many options for where to deploy applications or their parts: public clouds, private clouds, CDNs, ADNs, and so on. Each has its pros and cons, or even unique capabilities, that push owners to distribute application components across them. Even though that sounds like the right decision, it brings significant overhead to developer and DevOps teams, because the platforms are almost entirely incompatible. This not only requires a separate set of knowledge for each platform but also custom solutions to stitch them together into a boundless and secure communication space.
Volterra solves this challenge with a unified platform that allows creating remote application deployment sites and securely connecting them by attaching them to a virtual global network. The article below reviews the Volterra components that help interconnect clouds.
From a multi-cloud networking perspective, Volterra provides a fast and efficient backbone, the so-called Volterra global network. It pushes traffic through at multi-terabit rates and connects remote sites using overlay networks. The backbone consists of two main components. The first is 21 points of presence (PoPs) located all over the world, allowing customer sites to connect to the closest possible PoP. The second is redundant private connections between PoPs and to tier-1 carriers, public cloud providers, and SaaS services, providing high-speed, reliable traffic transfer with much lower latency than over the public Internet. The picture below represents the logical topology of Volterra connecting multiple remote sites.
The Volterra global network forms a physical network infrastructure that is securely shared by all tenants/customers. Users can create overlay networks on top of it to interconnect resources in remote sites or expose application services to the Internet.
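To make the tenancy model concrete, here is a minimal sketch in Python. It is purely illustrative (the tenant and site names are made up, and this is not the Volterra API): one shared physical backbone carries everyone's traffic, but each tenant's overlay is isolated, so sites can only reach other sites within the same tenant's overlay.

```python
# Illustrative sketch only (hypothetical names, not the Volterra API):
# a shared backbone with per-tenant overlay networks on top.

overlays = {
    "tenant-a": {"sites": ["aws-us-east", "azure-west-eu"]},
    "tenant-b": {"sites": ["gcp-us-central", "onprem-dc1"]},
}

def can_communicate(tenant1, site1, tenant2, site2):
    """Sites are reachable only within the same tenant's overlay,
    even though all overlays ride the same physical backbone."""
    return (tenant1 == tenant2
            and site1 in overlays[tenant1]["sites"]
            and site2 in overlays[tenant2]["sites"])
```

Two sites of the same tenant can talk; sites of different tenants cannot, despite sharing the physical infrastructure.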
As per the Volterra documentation, "Site is a physical or cloud location where Volterra Nodes are deployed". That is true; however, it doesn't give much clarity from a multi-cloud transit point of view. From that perspective, a site is a remote network that uses a cluster of Volterra node(s) as a gateway connecting it back to the Volterra global network. In simple words, if the Volterra global network is the trunk, then sites are the leaves.
Volterra unifies and simplifies the way sites are created and interconnected, whether they are located in a public cloud or in a private data center. The DevOps team doesn't need to manually manage resources in a remote location. Instead, Volterra automatically creates a remote network and deploys Volterra node(s) to attach it back to the global backbone. Currently, Volterra supports sites in the following locations:
After a site is registered in VoltConsole and its nodes are deployed, networking and application delivery configuration is unified regardless of the site's type.
To better understand how inter-site connectivity is organized, it is useful to take a closer look at Volterra nodes. Volterra nodes are Linux-based software appliances that act as a K8s cluster and form a super-converged infrastructure delivering networking and application delivery services. The picture below represents a logical diagram of the services running on a Volterra node.
A cluster of nodes sits at the edge of a site and either runs customer workloads or acts as a gateway interconnecting site-local resources with the Volterra global network.
A Volterra site can be deployed in one of two modes: on-a-stick or default gateway. In on-a-stick mode, ingress and egress traffic from the node is handled on a single network interface, as shown in the diagram.
In this mode, the available network functionality is a load balancer, a gateway for Kubernetes, an API gateway, or a generic proxy. This mode is mostly used for a stub site that runs application workloads using VoltMesh services such as virtual K8s.
In default-gateway mode, network functions such as routing and firewalling between the inside and outside networks can be enabled in addition to the functionality available in on-a-stick mode.
This mode is suitable not only for running workloads but also for acting as a router that forwards traffic to internal resources deployed behind the site.
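The difference between the two modes can be summarized in a short sketch. This is a hypothetical model (the names are illustrative, not the Volterra API): on-a-stick uses a single interface and a fixed set of proxying functions, while default-gateway adds an inside interface plus routing and firewalling.

```python
from dataclasses import dataclass

# Hypothetical model (illustrative names, not the Volterra API):
# which interfaces and network functions each deployment mode uses.

ON_A_STICK_FUNCTIONS = {"load_balancer", "k8s_gateway", "api_gateway", "generic_proxy"}
# default-gateway mode adds routing and firewalling between inside/outside networks
DEFAULT_GATEWAY_FUNCTIONS = ON_A_STICK_FUNCTIONS | {"routing", "firewall"}

@dataclass
class SiteMode:
    name: str
    interfaces: list   # network interfaces the node handles traffic on
    functions: set     # network functions available in this mode

on_a_stick = SiteMode("on-a-stick", ["outside"], ON_A_STICK_FUNCTIONS)
default_gateway = SiteMode("default-gateway", ["outside", "inside"],
                           DEFAULT_GATEWAY_FUNCTIONS)
```

Note that the default-gateway function set is a strict superset of the on-a-stick one, matching the statement that it adds routing and firewalling on top of the same proxying functionality.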
Once Volterra nodes are deployed across the sites, inter-site connectivity can be configured via VoltConsole. The picture below shows a logical diagram of all the components participating in the connectivity setup.
Connectivity configuration includes three steps. First, virtual networks have to be created with proper subnetting. The following network types are available:
Then those networks have to be attached to one of the Volterra node's network interfaces. There are two types of interfaces that can be configured - "physical" and "logical":
Lastly, a network connector defines connectivity type between networks. There are three types of network connectors:
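Putting the three steps together, the objects and their relationships can be sketched as follows. This is a minimal illustrative model (the class names, field names, subnets, and connector kind are hypothetical, not the actual Volterra object schema): networks are created with subnets, attached to node interfaces, and then linked by a connector.

```python
from dataclasses import dataclass

# Hypothetical data model (illustrative only, not the Volterra schema)
# tying together the three configuration steps described above.

@dataclass
class VirtualNetwork:
    name: str
    subnet: str                 # step 1: create networks with proper subnetting

@dataclass
class Interface:
    name: str
    kind: str                   # step 2: "physical" or "logical" interface
    network: VirtualNetwork     #         attach a network to a node interface

@dataclass
class NetworkConnector:
    name: str
    kind: str                   # step 3: one of the available connector types
    networks: tuple             #         the pair of networks being connected

inside = VirtualNetwork("site-local-inside", "10.1.0.0/24")
outside = VirtualNetwork("site-local-outside", "192.168.0.0/24")

eth0 = Interface("eth0", "physical", outside)
eth1 = Interface("eth1", "physical", inside)

connector = NetworkConnector("inside-to-outside", "example-kind",
                             (inside, outside))
```

In VoltConsole the same flow applies: until all three objects exist and reference each other, traffic has no path between the networks.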
Mixing network types with network connectors gives a user a wide variety of options to let resources located in one site talk to resources in another site, or even to the Internet, and vice versa. For an example, or to practice configuring inter-site connectivity, take a look at the step-by-step videos (link1, link2) or use a simulator (link1, link2).