Demo Guide: HA for Distributed Apps with F5 Distributed Cloud Services (SaaS Console, Automation)

F5 Distributed Cloud Services offers virtual Kubernetes (vK8s), which can be deployed on a Customer Edge (CE) location across multiple Availability Zones (AZs) for High Availability (HA). This approach gives customers the agility to bring modern distributed apps online in HA configurations and to manage the networking between app services for performance and security.

This demo guide showcases the deployment and management of a distributed application architecture, where the database workload is running in a vK8s deployed on a Site with multiple zones for HA. In addition, a separate deployment on F5 Distributed Cloud Regional Edge (RE) locations is used to test connectivity to the database.

A companion GitHub repo supports the automated deployment of the services used in this demo guide.

The general flow for deploying these HA distributed app services consists of the following steps:

  1. Create an AWS VPC
  2. Set up a CE Site with three AZs across three VMs as HA nodes
  3. Configure a virtual site and vK8s to extend Distributed Cloud (XC) K8s services to the AWS nodes
  4. Run a Helm chart to deploy a PostgreSQL database (representing the HA workload)
  5. Set up multiple REs with the NGINX module to showcase reading DB data from the CE and delivering app services with low latency to regional users
  6. Configure a TCP Load Balancer to expose the RE workload to the internet for end-user access
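As a sketch of step 4, the HA database could be deployed with the Bitnami PostgreSQL Helm chart in replication mode. The values below are illustrative assumptions, not the demo repo's actual chart; check the chart's documentation for current defaults.

```yaml
# values.yaml -- illustrative HA settings for the Bitnami PostgreSQL chart
architecture: replication       # primary plus streaming read replicas
readReplicas:
  replicaCount: 2               # one replica per additional AZ node
auth:
  postgresPassword: "change-me" # placeholder; use a Kubernetes secret in practice
```

With the vK8s kubeconfig downloaded from the Distributed Cloud Console, this could be installed with `helm repo add bitnami https://charts.bitnami.com/bitnami` followed by `helm install ha-db bitnami/postgresql -f values.yaml --kubeconfig <vk8s-kubeconfig>`.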

A Customer Edge (CE) site runs on top of cloud provider worker nodes, with the Site configuration supporting deployment across multiple cloud providers. Virtual sites can span multiple CEs, creating a multi-cloud infrastructure that shares underlying resources. In this way, vK8s can distribute app services globally and reduce time-to-value, with Helm chart automation and orchestration provided by F5 Distributed Cloud Services.
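In vK8s, a workload is pinned to one or more virtual sites with the `ves.io/virtual-sites` annotation. The minimal sketch below assumes a virtual site named `demo-vsite` in the `shared` namespace and a workload named `backend-db`; all of these names are hypothetical.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-db                      # hypothetical workload name
  annotations:
    # Target the virtual site spanning the CE HA nodes (name is an assumption)
    ves.io/virtual-sites: shared/demo-vsite
spec:
  replicas: 3                           # one pod per AZ node for HA
  selector:
    matchLabels:
      app: backend-db
  template:
    metadata:
      labels:
        app: backend-db
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
```

Applied with `kubectl apply -f deployment.yaml --kubeconfig <vk8s-kubeconfig>`, the scheduler places the pods on the nodes selected by the virtual site, so the same manifest can target CEs, REs, or both by changing the annotation.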

Regional Edge (RE) sites provide agility and time-to-value, and most importantly low-latency performance of app services in locations closest to end-users. The deployment model targeting K8s remains the same, which provides flexibility to deploy compute jobs in any location (CE, RE, or even on-premises with App Stack), based on customer preference and factors such as regulatory compliance.

The resulting solution leverages multi-cloud networking (MCN) to enable secure connectivity between locations and expose the services to other app components that can be deployed and connected from other clouds or locations. Combining RE and CE sites within a single vK8s deployment simplifies application management and secures communication between regional and backend applications. A TCP Load Balancer is used to enable access to the HA database in a CE from an RE location.
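Conceptually, the TCP Load Balancer advertises a listen port backed by an origin pool pointing at the database service on the CE. The fragment below is only an approximation of such an object; exact field names and structure should be taken from the Distributed Cloud Console or API reference, and the object and pool names are hypothetical.

```yaml
# Approximate shape of a TCP Load Balancer object (illustrative, not exact schema)
metadata:
  name: db-tcp-lb                # hypothetical load balancer name
  namespace: demo                # hypothetical namespace
spec:
  listen_port: 5432              # expose the PostgreSQL port
  dns_volterra_managed: true     # let the platform manage the DNS record
  origin_pools_weights:
    - pool:
        name: postgres-pool      # hypothetical origin pool targeting the CE service
```

The RE workload then reaches the database through the advertised address rather than addressing CE nodes directly, which keeps the backend private while still reachable across sites.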

An F5 Distributed Cloud Services CE Site with a centralized backend running on virtual Kubernetes simplifies deployment and management of distributed workloads and creates an ideal platform for other app services to connect to. With advanced mesh networking, Layer 4 and Layer 7 load balancing, and strong security policies, including the enhanced security posture provided by Distributed Cloud Web App & API Protection (WAAP), organizations can optimize resource allocation, ensure high availability, and enhance security.

In summary, this demo guide supports showcasing a modern distributed app infrastructure that offers centralized control and monitoring, while making it easier to manage distributed cloud infrastructure and helping find the optimal balance between having an HA centralized backend and latency-sensitive front-end needing proximity to end-users.


Updated Aug 16, 2023
Version 5.0
