Deploy High-Availability and Latency-sensitive workloads with F5 Distributed Cloud

Introduction

F5 Distributed Cloud Services delivers virtual Kubernetes (vK8s) capabilities to simplify deployment and management of distributed workloads across multiple clouds and regions. At the core of this solution is Distributed Cloud's multi-cloud networking service, which enables connectivity between locations. In Distributed Cloud, every location is identified as a site, and K8s clusters running in multiple sites can be managed by the platform. This greatly simplifies the deployment and networking of infrastructure and workloads.

Centralized databases require significant compute and memory resources and need to be configured for High Availability (HA). Meanwhile, latency-sensitive workloads require placement as close to an end user's region as possible. Distributed Cloud handles each scenario with a consistent approach to app and infrastructure configuration, using multi-cloud networking with advanced mesh and Layer 4 and/or Layer 7 load balancing. It also protects the full application ecosystem with consistent and robust security policies.

While Regional Edge (RE) sites deliver time-to-value and agility, there are many instances where customers may find it useful to deploy compute jobs in the location or region of their choice: a particular cloud region, a physical location closer to other app services, or a location dictated by regulatory or latency requirements. In addition, RE deployments are more constrained in their pre-configured options for memory and compute power; when a workload demands more resources or has specific requirements such as high memory or compute, a Customer Edge (CE) deployment may be a better fit.

One of the most common scenarios for such a demanding workload is a database deployment in a High-Availability (HA) configuration. An example would be a PostgreSQL database deployed across several compute nodes running within a Kubernetes environment, which is a perfect fit for a CE deployment. We'll break down this specific example in the content that follows, with links to other resources useful in such an undertaking.

Deployment architecture

F5 Distributed Cloud Services provide a mechanism to easily deploy Kubernetes apps by using virtual Kubernetes (vK8s), which helps to distribute app services across a global network while making them available closer to users. You can easily combine RE and CE sites in one vK8s deployment to ease application management and securely communicate between regional deployments and backend applications.

Configuration of our CE starts with the deployment of an F5 CE Site, which provides ways to easily connect and manage the multi-cloud infrastructure. The Distributed Cloud CE Site works with other CE and F5-provided RE Sites, resulting in a robust distributed app infrastructure with full mesh connectivity and ease of management, as if it were a single K8s cluster.

From an architecture standpoint, a centralized backend or database deployed in a CE Site provides an ideal platform that other sites can connect to. We can provision several nodes in a CE for a high-availability configuration of a PostgreSQL database cluster. The services within this cluster can then be exposed to other app services, such as deployments in RE sites, by way of a TCP load balancer. Thus, the app services that consume database objects can reside close to the end user if they are deployed in F5 Distributed Cloud Regional Edge, resulting in the following optimized architecture:


Prepare environment for HA Load

F5 Distributed Cloud Services allows you to create customer edge sites with worker nodes on a wide variety of cloud providers: AWS, Azure, and GCP. The prerequisite is a Distributed Cloud CE Site or App Stack; once deployed, you can expose the services created on these edge sites via the site mesh and any additional load balancers. A single App Stack edge site may support one or more virtual sites, which act as logical groupings of site resources.

A single virtual site can be deployed across multiple CEs, thus creating a multi-cloud infrastructure. It's also possible to place several virtual sites into one CE, each with its own policy settings for more granular security and app service management. Several virtual sites can also overlap, sharing some CE sites as underlying resources while differing in others.

During the creation of sites and virtual sites, labels such as site name and site type can be used to organize site resources.

The diagram below shows how vK8s clusters can be deployed across multiple CEs with virtual sites to control distributed cloud infrastructure. Note that this architecture shows four virtual clusters assigned to CE sites in different ways.


In our example, we start by creating an AWS VPC site with worker nodes, as described here. When the site is created, a label must be assigned: use the ves.io/siteName label to name the site. Follow these instructions to configure the site.

As soon as the edge site is created and the label is assigned, create a virtual site, as described here. The virtual site should be of type CE, and its selector must use the ves.io/siteName label with the == operator and the name of the AWS VPC site. Note the virtual site name, as it will be required later.

At this point, our edge site for the HA database deployment is ready. Now create the vK8s cluster and select both virtual sites (one on the CE and one on the RE) by using the corresponding labels; the all-res virtual site will be used for the deployment of workloads on all REs.

The environment for both RE and CE deployments is now ready.

Deploy HA Postgres to CE

We will use Helm charts to deploy a PostgreSQL cluster configuration with the help of Bitnami, which provides ready-made Helm charts for HA databases (MongoDB, MariaDB, PostgreSQL, etc.) in the following repository: https://charts.bitnami.com/bitnami. In general, these Helm charts work very similarly, so the example used here can be applied to most other databases or services.
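
As a minimal sketch (assuming the standard Helm CLI and the Bitnami PostgreSQL HA chart), the repository can be added and the chart located like this:

# add the Bitnami chart repository and find the PostgreSQL HA chart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/postgresql-ha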

An important key in the chart values is clusterDomain. The value is constructed this way: {sitename}.{tenant_id}.tenant.local. Note that the site name here is the edge site name, not the virtual site name. You can get this information from the site settings: open the JSON settings of the site in the AWS VPC Site list; the tenant ID and site name are shown as the tenant and name fields of the object:


 

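For example, if the site's JSON shows name as my-aws-site and tenant as acmecorp-abcdefgh (both hypothetical values used for illustration), the corresponding chart value would be:

  clusterDomain: my-aws-site.acmecorp-abcdefgh.tenant.local
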
vK8s supports only non-root containers, so these values must be specified:
  containerSecurityContext:
    runAsNonRoot: true

To deploy the workload to a predefined virtual site, specify:
  commonAnnotations:
    ves.io/virtual-sites: "{namespace}/{virtual site name}"
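
Putting these pieces together, a minimal sketch of a values.yaml and the install command might look like this (the ha-postgres release name matches the service name shown below; the namespace, virtual site name, and kubeconfig path are placeholders for your own environment):

  # values.yaml (sketch)
  clusterDomain: my-aws-site.acmecorp-abcdefgh.tenant.local
  containerSecurityContext:
    runAsNonRoot: true
  commonAnnotations:
    ves.io/virtual-sites: "ha-demo/ce-virtual-site"

  # install the Bitnami PostgreSQL HA chart into the vK8s namespace
  helm install ha-postgres bitnami/postgresql-ha \
    --kubeconfig ./vk8s-kubeconfig.yaml \
    --namespace ha-demo \
    -f values.yaml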

When deployed, the HA database exposes its connection via a set of services. For PostgreSQL, the service name might look like ha-postgres-postgresql-ha-postgresql, on port 5432. To review the services of the deployments, select the Services tab of the vK8s cluster. Even though the RE deployment and the CE deployment are in one vK8s namespace, they cannot reach each other directly; services need to be exposed as Load Balancers first.
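
The same list can also be retrieved from the command line using the kubeconfig downloaded for the vK8s cluster (namespace and kubeconfig path are placeholders):

  kubectl --kubeconfig ./vk8s-kubeconfig.yaml -n ha-demo get services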

Expose CE service to RE deployment

To access the HA database deployed to the CE site, we need to expose the database service via a TCP Load Balancer, which is created on top of an Origin Pool. To create the Origin Pool for the vK8s-deployed service, follow these instructions. As soon as the Origin Pool is ready, the TCP Load Balancer can be created, as described here. This load balancer needs to be accessible only from the RE network, or in other words, advertised there. Therefore, when creating the TCP Load Balancer, specify "Advertise Custom" for the "Where to Advertise the VIP" field. Click "Configure" and select "vK8s Service Network on RE" for the "Select Where to Advertise" field, as well as "Virtual Site Reference" and "ves-io-shared/ves-io-all-res" for the subsequent settings.

Also, make sure to specify a domain name in the "Domain" field. This makes it possible to access the service via the TCP Load Balancer domain and port. If the domain is specified as re2ce.internal and the port is 5432, the connection to the DB can be made from the RE using these settings.
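
As a quick sanity check, a connection attempt from a pod running on the RE (with the psql client available) might look like this; the database name and user match those used in the NGINX configuration later in this article, and psql will prompt for the password:

  psql "host=re2ce.internal port=5432 dbname=haservicesdb user=haservices" -c "SELECT 1;"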

RE to CE Connectivity


At this point, the HA database workload is deployed to the CE environment. This workload implements a central data store that takes advantage of the compute-intensive resources provided by the CE. While the CE is an ideal fit for compute-heavy operations, it typically serves the single cloud region in which it is deployed. This architecture can be complemented with a multi-region design: end users in regions other than the CE's can see lower latency when some of the data and compute capability is moved off the CE and onto an RE close to their region.

Moving services with data access points to the edge raises questions of caching and update propagation. The ideal use cases for such services are not overly compute-heavy but rather time- and latency-sensitive workloads, those that require decision-making at the compute edge. These edge services still require secure connectivity back to the core; in our case, we can stand up a mock service in the Regional Edge to consume the data from the centralized Customer Edge and present it to end users.

The NGINX reverse proxy server is a handy solution for implementing data access decisions at the edge. NGINX has several modules that allow access to backend systems via the HTTP protocol. PostgreSQL does not provide such an adapter natively, but NGINX has a module just for that: NGINX OpenResty can be compiled with a Postgres module, allowing GET/POST requests to access and modify data. To enable access to the Postgres database, the upstream block is configured this way:

upstream database {
        # point the Postgres module at the TCP Load Balancer domain advertised on the RE network
        postgres_server  re2ce.internal dbname=haservicesdb
                         user=haservices password=haservicespass;
}

Once the upstream is set up, queries can be performed:
location /data {    
          postgres_pass     database;
          postgres_query    "SELECT * FROM articles";
}

Unfortunately, postgres_query and postgres_pass do not support caching, so an additional proxy_pass needs to be configured:
location / {
            rds_json on;                                       # convert the binary response to JSON
            proxy_buffering on;
            proxy_cache srv;                                   # cache zone defined elsewhere with proxy_cache_path
            proxy_ignore_headers Cache-Control;
            proxy_cache_methods GET HEAD POST;
            proxy_cache_valid  200 302  30s;
            proxy_cache_valid  404      10s;
            proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504 http_429;
            add_header X-Cache-Status $upstream_cache_status;  # expose cache HIT/MISS status to clients
            proxy_pass http://localhost:8080/data;             # proxy to the /data location served locally on port 8080
}

Note the additional rds_json directive above; it converts the response from the binary format to JSON. Now that the data is cached on the Regional Edge, the cached response is returned even when the central server is unavailable or inaccessible. This is an ideal pattern for a distributed multi-region app, where the deployment on the RE exposes a service to end users via a Load Balancer.
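
Since the configuration adds the X-Cache-Status header, a quick check against the RE service (using a hypothetical load balancer domain) shows whether a response was served from cache; expect MISS on the first request and HIT on subsequent requests within the 30-second validity window:

  curl -s -o /dev/null -D - http://app.example.com/ | grep -i x-cache-status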


Enhanced Security Posture with Distributed Cloud

Of course, we're using NGINX with the PostgreSQL module for illustration purposes only; exposing databases this way in production is not secure. However, this gives us an opportunity to think through how publicly accessible service endpoints can potentially be open to attacks. A Web App Firewall (WAF) is provided as part of the Web App & API Protection (WAAP) set of services within F5 Distributed Cloud and can secure all of the services exposed in our architecture with a consistent set of protections and controls. For example, with just a few clicks, we can protect the Load Balancer that exposes an external web port to end users on the RE using WAF and bot protection services. Similarly, other services on the CE can be protected with the same consistent security policies.

Monitoring & Visibility

All of the networking, performance, and security data and analytics are readily available to end users within the F5 Distributed Cloud dashboards. For our example, this includes a list of all connections from RE to CE via the TCP load balancer, detailed for each RE site:


Another useful data point is a chart and detail of HTTP load balancer requests:


Conclusion

In summary, the success of a distributed cloud architecture depends on placing the right types of workloads on the right cloud infrastructure. F5 Distributed Cloud provides and securely connects various types of distributed app-ready infrastructure, such as the Customer Edge and Regional Edge used in our example. A compute-heavy centralized database workload that requires high availability can take advantage of vK8s for ease of deployment and configuration with Helm charts, scalability, and control. The CE workload can then be exposed via load balancers to other services deployed in other clouds or regions, such as the Regional Edge service we utilized here. All of the distributed cloud infrastructure, networking, security, and insights are available in one place with F5 Distributed Cloud services.

Additional Material

Now that you've seen how to build our solution, try it out for yourself!

Product Simulator: A guided simulation in a sandbox environment covering each step in this solution
Demo Guide: A comprehensive package with a step-by-step guide and the assets needed to walk through this solution in your own environment, including the scripts to automate a deployment and the images that support the sample application.


Published Feb 14, 2023
Version 1.0
