This article wouldn’t have been the same without the efforts of @Fouad_Chmainy, @Matt_Dierick, and Alexis Da Costa. They are the original authors of the distributed design, the Sentence app, and the NGINX Plus OIDC image optimized for Distributed Cloud. Additionally, special thanks to @Cody_Green and @Kevin_Reynolds for inspiration and assistance in the Terraform portion of the solution. Thanks, guys!
An important and long-standing need for enterprise storage is the ability to recover from disasters through rapid, easy access to constantly replicated data volumes. Beyond reducing corporate downtime during recovery events, replicated volumes are also critical for cloning purposes, such as researching data trends or performing advanced analytics on enterprise data.
A modern need exists to quickly replicate data across a wide breadth of sites, drawing on the diversity of major cloud providers such as AWS, Azure, and Google. The ability to simultaneously replicate critical data to several of these hyperscalers addresses a major industry concern: vendor lock-in. Modern data stores must be efficiently and quickly saved to, and acted upon, using whichever cloud provider an enterprise desires. Principal reasons for this hybrid cloud requirement include maximizing return on investment by shopping for attractive price points or more nines of reliability.
Although major cloud providers offer their own VPN-style solutions to support data replication, for example Microsoft Azure VPN Gateway deployments, running concurrent, differing solutions can quickly become an administrative burden. Each cloud provider offers slightly distinctive networking and security products. A critical concern is the shortage of the advanced skill sets often required to maintain configuration and diagnostic processes for competing cloud storage solutions. With flux to be expected in staffing, the long-term cost of stitching disparate cloud technologies into one cohesive offering for the enterprise has proven high.
This is precisely the multi-cloud strategy where F5 Distributed Cloud (XC) can complement industry-leading, enterprise-grade storage solutions from a major player like NetApp. With F5 Distributed Cloud Network Connect, multiple points of presence of an enterprise, including on-prem data centers and a multitude of cloud properties, are seamlessly tied together through a multi-cloud networking (MCN) offering that leverages a 20 Tbps backbone. Service turn-up is measured in minutes, not days.
An excellent complement to the F5 XC hybrid secure networking offering is NetApp’s modern approach to managing enterprise data estates, NetApp BlueXP. This unified, cloud-based control plane from NetApp allows an enterprise to manage volumes both on-prem and in major cloud providers, and in turn to set up high-value services like data replication. In keeping with the simple workflows F5 XC delivers for secure networking setup, NetApp BlueXP also consists of intuitive workflows. For instance, simply drag one volume onto another on the point-and-click canvas and standard SnapMirror replication is enabled. F5 XC can underpin the connectivity requirements of a multi-cloud hybrid environment by handling truly seamless and secure network communications.
The first step in demonstrating the F5 and NetApp solutions working in concert to provide efficient disaster recovery of enterprise volumes was to set up F5 XC customer edge (CE) sites in Azure, AWS, and an on-prem data center location. The CE is a security demarcation point, a virtualized or dedicated server appliance, allowing highly controlled access to key enterprise resources from specific locales selected by the enterprise. For instance, a typical CE deployment for MCN purposes is a 2-port device whose inside ports permit selective access to important networks and resources.
Each CE automatically multi-homes to geographically close F5 regional edge (RE) sites; no administrative burden is incurred and no networking command-line workflows need be learned, since CE deployments are wizard-based workflows with encrypted tunnels established automatically. The following screenshot demonstrates, in the red highlighted area, that a sample Azure CE site freshly deployed in the Azure Americas-2 region has automatic encrypted tunnels set up to New York and Washington, DC RE nodes.
Regardless of the site, be it an AWS VPC, an AWS services VPC supporting a transit gateway (TGW), an Azure VNet, or an on-prem location, the net result is always a rapid setup with redundant auto-tunneling to the F5 international fabric provided by the global RE network. Other CE attachment models can be invoked, such as direct site-to-site connectivity that bypasses the RE network; however, the focus of this document is the most prevalent approach, which harnesses the uptime and bandwidth advantages of the RE network stitching customer sites together.
With connectivity available between the inside interfaces of deployed CEs, standard firewall rules easily added, and service insertion of third-party NGFW technology such as Palo Alto firewall instances supported, the plumbing to efficiently interconnect NetApp volumes for ongoing replication is now in place.
The objective for the F5 XC deployment was to utilize the Network Connect module to allow layer 3 connectivity between the inside ports of CEs regardless of site type. In other words, connectivity between networked resources at on-prem, AWS, or Azure sites is enabled quickly with a consistent, simple guided workflow. The practical application of this layer 3 style of MCN was connectivity between NetApp volumes, as depicted in the following diagram.
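Although this article's deployment used the guided console workflows, the same Network Connect constructs can also be declared in Terraform. The following is a minimal, hypothetical sketch using the Volterra provider; the resource and variable names are illustrative, and the per-site attachment schema is omitted (consult the provider documentation for the full site objects):

```hcl
terraform {
  required_providers {
    volterra = {
      source = "volterraedge/volterra"
    }
  }
}

# Example variable names (not from the article's repositories)
variable "api_url" { type = string }      # e.g. https://<tenant>.console.ves.volterra.io/api
variable "api_p12_file" { type = string } # API credential .p12 from the XC console

# Credentials: the .p12 certificate downloaded from the XC console
# (its password is typically supplied via the VES_P12_PASSWORD env var).
provider "volterra" {
  api_p12_file = var.api_p12_file
  url          = var.api_url
}

# A global virtual network that CE inside interfaces at each site can join,
# giving layer 3 reachability between on-prem, AWS, and Azure resources.
resource "volterra_virtual_network" "mcn_global" {
  name           = "mcn-global-net"
  namespace      = "system"
  global_network = true
}
```

Each CE site object (AWS VPC site, Azure VNet site, or on-prem site) would then reference this global network on its inside interface, so traffic between sites rides the encrypted RE fabric described above.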
A widely embraced enterprise-class file storage offering is the industry-leading NetApp ONTAP solution. When deployed on-prem, the solution provides shared file storage, often over the NFS or SMB protocols, frequently with multiple nodes forming a storage cluster. Although originally hardware-appliance-oriented, modern incarnations of on-prem ONTAP solutions can easily and frequently utilize virtualized appliances.
Both NetApp and F5, in keeping with modern control-plane industry trends, have moved toward a centralized, portal-based approach to configuration, whether for storage appliances (NetApp) or multi-cloud networking (F5). This SaaS approach to configuration and monitoring means the control plane software is always up to date and requires no day-to-day management. In the case of NetApp, this modern control plane is instantiated in the cloud-based BlueXP portal.
The sample BlueXP canvas displayed above demonstrates the diversity of data estate entities that can be managed from one workspace, with volumes both on-premises and AWS cloud-based, along with Amazon S3 storage seen.
NetApp offers a widely used cloud-based implementation of file storage, Cloud Volumes ONTAP (CVO), which serves as an excellent repository for replicating traditional on-premises volumes. In the demonstration environment, both AWS and Azure were harnessed to quickly set up CVO instances. For BlueXP to establish a workspace involving a managed CVO instance, a “Connector” is deployed in the AWS VPC or Azure VNet. This Connector is the entity that facilitates the BlueXP control plane management functions for hybrid-cloud storage.
Upon establishing on-premises to AWS and Azure connectivity, enabled by the F5 XC Customer Edge (CE) nodes deployed at each site, a vast and mature range of features is available to the BlueXP operator.
As highlighted above, a core function of the BlueXP services is replication; in this workspace, one can see the on-premises cluster being replicated automatically to an Azure CVO instance.
The result of combining the F5 Distributed Cloud multi-cloud networking support with the NetApp ability to safeguard mission-critical enterprise data, anywhere, was found to be a smooth, intuitive set of guided configuration steps. Within an hour, protected inside networks were established in two popular cloud providers, AWS and Azure, as well as in an existing on-premises data center. With the connectivity encrypted and standard firewall rules available, including the option to run data flows through inline third-party NGFW instances, the focus upon practical usage of the cloud infrastructure could commence.
A multi-site file storage solution was deployed using the NetApp BlueXP SaaS console, whereby an on-premises ONTAP cluster received local files through the NFS protocol. To demonstrate the value of a multi-cloud deployment, the F5 XC Network Connect module allowed real-time file replication of the on-prem cluster contents to separate and independent volumes securely located within an AWS VPC and an Azure VNet, respectively. Using F5 XC, the target networks within the cloud providers were highly secured, only permitting access from the data center.
The net result is a solution that can accommodate disaster recovery requirements; for instance, a clone of the AWS or Azure volumes could be created and utilized for business continuity in the event of data corruption or disk failure on premises. Other use cases would be to clone the cloud-based volumes for research and development purposes, analytics, and further backup purposes that could utilize snapshotting or imaging of the data. The inherent redundancy offered by using multiple, secured cloud instances could easily be enhanced by expanding to other hyperscalers, for instance Google Cloud Platform, should business purposes dictate such a configuration.
A simple and intuitive simulator is available to walk users quickly through the setup of an F5 Distributed Cloud MCN deployment such as the one reflected in this article. The simulator can be found here.
For a complete, comprehensive walk-through of F5 Distributed Cloud Multi-Cloud Networking (MCN), including setup steps, please see this DevCentral article - Multi-Cloud Networking Walkthrough with Distributed Cloud.
F5 Distributed Cloud (XC) services provide full REST APIs to enable automation of the deployment and management of multi-cloud infrastructure. Organizations looking to implement infrastructure-as-code operations for modern apps and to distribute and secure multi-cloud deployments can utilize and adapt the Terraform and Ansible scripts in the many articles on F5 DevCentral that cover automation topics for F5 Distributed Cloud. Typically, these scripts automate and help to consistently:
This article focuses only on the Deliver part of the distributed app lifecycle, where, using Terraform scripts with F5 Distributed Cloud services, organizations can easily deploy and configure multi-cloud networking and app connectivity for their distributed applications that span:
The easiest place to get started with automation of multi-cloud networking (MCN) and Edge Compute scenarios is by cloning the corresponding GitHub repositories from the Demo Guides, which include sample applications and provide opportunities to see automation scripts in action. The Terraform scripts within the following Demo Guides can serve as templates: with just a quick update of the variables unique to your environment, they can be customized to your organization’s requirements to automate repetitive tasks or resource creation.
Multi-cloud networking use-cases Demo Guide, where you can use Terraform to enable connectivity across multiple clouds and explore using HTTP and TCP load balancers to connect the provided sample application. You can use the scripts in the GitHub repositories to deploy the required sample app and other components representative of a traditional 3-tier app architecture (backend + database + frontend).
Furthermore, the scripts provide the flexibility of choosing the target clouds (AWS, Azure, or both), which you can adapt to your environment and app topologies based on which clouds the different app services should be deployed to. Use the guide to get familiar with how to update the variables for each cloud configuration, so that you can further customize the scripts to your environment, helping automate and simplify deployment of networking topologies across Azure and AWS and ultimately saving time and effort.
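To give a feel for what those scripts configure, here is a condensed, hypothetical sketch of an HTTP load balancer fronting a sample-app origin pool using the Volterra Terraform provider. The names, IP address, site reference, and domain are placeholders, and several optional fields are omitted for brevity; the demo-guide repositories contain the complete, working versions.

```hcl
# Origin pool pointing at a sample app instance reachable on the
# inside network of a CE site (IP and site name are examples).
resource "volterra_origin_pool" "app" {
  name                   = "demo-app-pool"
  namespace              = "demo-ns"
  port                   = 8080
  no_tls                 = true
  endpoint_selection     = "LOCAL_PREFERRED"
  loadbalancer_algorithm = "LB_OVERRIDE"

  origin_servers {
    private_ip {
      ip             = "10.0.1.10"
      inside_network = true
      site_locator {
        site {
          name      = "aws-ce-site"
          namespace = "system"
        }
      }
    }
  }
}

# HTTP load balancer advertised on the public default VIP,
# routing requests for the placeholder domain to the pool above.
resource "volterra_http_loadbalancer" "app" {
  name      = "demo-app-lb"
  namespace = "demo-ns"
  domains   = ["app.example.com"]

  http {
    port = 80
  }

  advertise_on_public_default_vip = true
  round_robin                     = true

  default_route_pools {
    pool {
      name      = volterra_origin_pool.app.name
      namespace = "demo-ns"
    }
  }
}
```

A TCP load balancer follows the same pattern with a `volterra_tcp_loadbalancer` resource, which is how the database tier of the sample 3-tier app can be exposed between clouds.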
Edge Compute for Multi-cloud Apps Demo Guide, where Terraform scripts help automate deployment of the application infrastructure across AWS (a sample app and other components representative of a traditional 3-tier app architecture: backend, database, frontend). The result is a multi-cloud architecture, with components deployed on Microsoft Azure and AWS.
By adapting the included Terraform scripts, you can easily deploy and securely network app services to create a distributed app model that spans:
In the process, you get familiar with configuring TCP and HTTP load balancers, create a virtual Kubernetes (vK8s) cluster that spans multiple locations/clouds, and deploy distributed app services across those locations with the help of the Terraform scripts.
Deploying high-availability configurations Demo Guide is an important resource for getting familiar with, and automating, High-Availability (HA) configuration of backend resources. This guide uses, as an example, a PostgreSQL database HA deployment on a CE (Customer Edge), a common use-case leveraging an F5 Distributed Cloud Customer Edge for deploying a backend. First, deploy the AWS Site environment, then deploy a vK8s, and finally customize and run a Bitnami Helm chart to configure a multi-node PostgreSQL deployment.
Of course, you can leverage this type of automation with a Helm chart of your choice to configure a different backend resource or database type. Adapt it to your environment with a few changes to the script variables, and feel free to combine it with scripts from the other two guides to deploy the app(s) and configure networking (MCN) should you choose to automate the entire workflow.
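As a concrete illustration of that pattern, the Helm step itself can be driven from Terraform. The following is a hedged sketch using the official Helm provider pointed at a kubeconfig exported for the vK8s cluster; the kubeconfig path, release name, and replica value are assumptions for illustration, not the demo guide's exact settings.

```hcl
# Helm provider configured against the vK8s kubeconfig
# downloaded from the XC console (path is an example).
provider "helm" {
  kubernetes {
    config_path = "./vk8s-kubeconfig.yaml"
  }
}

# Multi-node PostgreSQL via the Bitnami postgresql-ha chart.
resource "helm_release" "postgresql_ha" {
  name       = "pg-ha"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "postgresql-ha"

  # Example override: run three PostgreSQL nodes for HA.
  set {
    name  = "postgresql.replicaCount"
    value = "3"
  }
}
```

Swapping the `chart` and `set` values lets the same `helm_release` pattern stand up a different backend or database type, as noted above.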
Terraform scripts represent ready-to-use code, which you can easily adapt to your own apps, environments, and services, or extend as needed. The baseline for most scripts is the Volterra provider, with the required edits and updates made to the Terraform variables. Variables are special elements that store and pass values to different parts of the modules without changing the code in the main configuration file. They provide the flexibility to update infrastructure settings and parameters, simplifying configuration and support.
Variables are stored in the .tf files of the respective folders. Using the Deploying high-availability configurations Demo Guide as an example, you change the environment variable values related to your app, which you can find in the terraform folder and the application subfolder. Open the var.tf file to update the values:
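The declarations in var.tf follow standard Terraform input-variable syntax. An illustrative sketch is below; the variable names are examples chosen for clarity, not necessarily the repository's exact names.

```hcl
# var.tf — example declarations; override these in terraform.tfvars
# or with -var on the command line.
variable "api_url" {
  type        = string
  description = "Tenant API URL, e.g. https://<tenant>.console.ves.volterra.io/api"
}

variable "api_p12_file" {
  type        = string
  description = "Path to the API credential .p12 file from the XC console"
}

variable "aws_region" {
  type        = string
  description = "Region for the AWS Site hosting the CE"
  default     = "us-east-2"
}

variable "app_namespace" {
  type        = string
  description = "XC namespace holding the application objects"
  default     = "ha-demo"
}
```

A matching terraform.tfvars file then pins the values for your environment, e.g. `api_url = "https://acme.console.ves.volterra.io/api"`, so the main configuration files never need editing.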
More detailed information on variables can be found here.
In summary, Demo Guide repositories include Terraform scripts used to help automate different operations, including deployment of the environment required for the sample distributed app, as well as deploying the app itself. You can take a closer look at the Demo Guide use-cases together with their respective Terraform scripts, run a quick test to get familiar with the use-case, and then adapt the scripts to your environment and your applications.
Whether your app has high-availability requirements or a distributed multi-cloud infrastructure, using Terraform with F5 Distributed Cloud services can simplify deployment, automate infrastructure on any cloud, and save time and effort managing and securing app resources in any cloud or data center.
To add to Nikoolayy1's comment, F5 can generate API Definitions within XC, and we are working on integration for BIG-IP and nginx deployments. This will allow traffic that does not use XC for client traffic to provide the Swagger/OpenAPI files and security assessments available for XC Load Balancers.
Currently, you use a logger on the proxy (BIG-IP or NGINX) to gather the request and response data, which is then sent to XC via a separate service. The advantage here is that you don't need to change the traffic flow of your client traffic.
If you're interested in learning more, please PM me or reach out to your local account team.