Use F5 Distributed Cloud to Connect Apps Running in Multiple Clusters and Sites

Introduction

Modern apps are composed of many smaller components and can take advantage of today’s agile computing landscape. One of the challenges IT admins and security operations teams face is securely controlling access to all the components of a distributed app as the business grows, changes hands through mergers and acquisitions, or sees its contracts change. F5 Distributed Cloud (F5XC) makes it easy to provide uniform access to distributed apps regardless of where their components live.


Solution Overview

Arcadia Finance is a distributed app with modules that run in multiple Kubernetes clusters and locations. To expedite development of a key part of the app, the business has decided to outsource work on the Refer A Friend module. IT Ops must now relocate the Refer A Friend module to a separate site that only the new contractor’s team of developers can access. Because the app is modular, IT has shared a copy of the Refer A Friend container with the contractor, and now that it is up and running in the new site, traffic to the module needs to transition away from the version developed in house to the one managed by the contractor.

Logical Topology

Distributed App Overview

The Refer A Friend endpoint is called by the Arcadia Finance frontend pod in Kubernetes (K8s) when a user of the service wants to invite a friend to join. The pod does this by making an HTTP request to “refer-a-friend.demo.internal/app3/”. The endpoint “refer-a-friend.demo.internal” is defined by an F5XC HTTP Load Balancer policy whose VIP is advertised internally to specific sites, including the K8s cluster. F5XC uses the cluster’s K8s API to register services and make them available anywhere within the customer tenant’s configured global network.
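To make the call flow concrete, here is a minimal sketch of the kind of request the frontend pod issues. The hostname and path come from the description above; the function name and JSON payload are hypothetical and only illustrate the shape of the callout.

```python
# Minimal sketch of the callout the frontend pod makes when a user refers
# a friend. The hostname and path come from the article; the function name
# and payload are hypothetical.
import requests

REFER_A_FRIEND_URL = "http://refer-a-friend.demo.internal/app3/"

def refer_a_friend(friend_email: str) -> int:
    # refer-a-friend.demo.internal resolves to the VIP that the F5XC HTTP
    # Load Balancer advertises internally, so this call only works from
    # sites the VIP is advertised to (including this K8s cluster).
    resp = requests.post(REFER_A_FRIEND_URL, json={"email": friend_email}, timeout=5)
    return resp.status_code
```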
 
Three sites are used by the company that owns Arcadia Finance to deliver the distributed app. The core of the app lives in a K8s cluster in Azure, while administration and monitoring of the app run in the customer’s legacy site in AWS. To maintain security, the new contractor only has access to GCP, where they’ll continue developing the Refer A Friend module. An F5XC global virtual network connects all three sites, and all three sites are in a site mesh group to streamline communication between the different app modules.

Steps to Deploy

To reach the app externally, an HTTP Load Balancer policy is configured with an origin pool that connects to the K8s “frontend” service; the origin pool reaches the service through a Kubernetes Site registered in F5XC.
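The same configuration can be scripted against the F5XC config API. The sketch below assumes the public API pattern /api/config/namespaces/{namespace}/origin_pools with APIToken authentication; the tenant URL, namespace, site name, and spec field names are placeholders to illustrate the idea and should be checked against the API reference for your tenant.

```python
# Hedged sketch: create an origin pool that targets the K8s "frontend"
# service through a Kubernetes Site registered in F5XC. Spec field names
# are illustrative, not authoritative.
import requests

TENANT_API = "https://<tenant>.console.ves.volterra.io/api"
HEADERS = {"Authorization": "APIToken <your-api-token>"}
NAMESPACE = "arcadia"  # assumed application namespace

frontend_pool = {
    "metadata": {"name": "frontend-pool", "namespace": NAMESPACE},
    "spec": {
        "origin_servers": [{
            "k8s_service": {
                "service_name": "frontend.default",                    # <service>.<k8s namespace>, assumed
                "site_locator": {"site": {"name": "azure-k8s-site"}},  # assumed Kubernetes Site name
            }
        }],
        "port": 80,
    },
}

resp = requests.post(
    f"{TENANT_API}/config/namespaces/{NAMESPACE}/origin_pools",
    json=frontend_pool,
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()
```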

A second HTTP Load Balancer policy is configured for the Refer A Friend module; its origin pool points to a static IP that lives in Azure and is reached through a registered Azure VNET Site.
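A hedged sketch of that origin pool follows, with a single origin server defined as a private static IP reached through the Azure VNET Site. As above, the site name, IP address, and spec field names are placeholders.

```python
# Hedged sketch: origin pool for the in-house Refer A Friend module,
# reached as a private IP via the Azure VNET Site. Field names are
# illustrative; confirm against the F5XC API reference.
import requests

TENANT_API = "https://<tenant>.console.ves.volterra.io/api"
HEADERS = {"Authorization": "APIToken <your-api-token>"}
NAMESPACE = "arcadia"  # assumed application namespace

refer_pool_azure = {
    "metadata": {"name": "refer-a-friend-azure", "namespace": NAMESPACE},
    "spec": {
        "origin_servers": [{
            "private_ip": {
                "ip": "10.0.2.50",                                      # assumed module IP in the VNET
                "site_locator": {"site": {"name": "azure-vnet-site"}},  # assumed Azure VNET Site name
            }
        }],
        "port": 80,
    },
}

requests.post(
    f"{TENANT_API}/config/namespaces/{NAMESPACE}/origin_pools",
    json=refer_pool_azure,
    headers=HEADERS,
    timeout=30,
).raise_for_status()
```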

When the Refer A Friend module is needed, a pod in the K8s cluster connects to the Refer A Friend internal VIP advertised by the HTTP Load Balancer policy. This connection is then tunneled by F5XC to an endpoint where the module runs.
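A quick way to confirm that path from inside the cluster is to resolve the advertised name and make a request to it. The sketch below uses only the hostname and path from the description above; everything else is a generic verification snippet.

```python
# Quick check, run from a pod inside the K8s cluster: the internally
# advertised name should resolve to the F5XC VIP and answer over HTTP.
import socket
import requests

HOST = "refer-a-friend.demo.internal"

vip = socket.gethostbyname(HOST)  # resolves to the internally advertised VIP
print(f"{HOST} resolves to {vip}")

resp = requests.get(f"http://{HOST}/app3/", timeout=5)
print(f"GET /app3/ -> {resp.status_code}")  # F5XC tunnels this to the site running the module
```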

With development of the Refer A Friend module turned over to the contractor, we only need to change the HTTP Load Balancer policy to use an origin pool located in the contractor’s GCP VPC Site.

The origin pool for the GCP-located module is nearly identical to the one used in Azure.
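Continuing the API sketch, the cutover amounts to creating a near-identical origin pool that points at the module’s private IP through the contractor’s GCP VPC Site, then repointing the load balancer’s default route pool. All names, IPs, and spec fields remain placeholders to be verified against the F5XC API reference.

```python
# Hedged sketch of the cutover. Endpoint paths follow the F5XC config API
# pattern; all names, IPs, and spec field names are illustrative.
import requests

TENANT_API = "https://<tenant>.console.ves.volterra.io/api"
HEADERS = {"Authorization": "APIToken <your-api-token>"}
NAMESPACE = "arcadia"  # assumed application namespace

# 1. Origin pool that reaches the module via the contractor's GCP VPC Site.
refer_pool_gcp = {
    "metadata": {"name": "refer-a-friend-gcp", "namespace": NAMESPACE},
    "spec": {
        "origin_servers": [{
            "private_ip": {
                "ip": "10.128.0.10",                                 # assumed module IP in the GCP VPC
                "site_locator": {"site": {"name": "gcp-vpc-site"}},  # assumed GCP VPC Site name
            }
        }],
        "port": 80,
    },
}
requests.post(f"{TENANT_API}/config/namespaces/{NAMESPACE}/origin_pools",
              json=refer_pool_gcp, headers=HEADERS, timeout=30).raise_for_status()

# 2. Repoint the Refer A Friend load balancer's default route at the new pool;
#    the advertised VIP and domain stay the same, so the frontend needs no changes.
lb_update = {
    "metadata": {"name": "refer-a-friend-lb", "namespace": NAMESPACE},  # assumed LB name
    "spec": {
        "domains": ["refer-a-friend.demo.internal"],
        "default_route_pools": [{"pool": {"name": "refer-a-friend-gcp", "namespace": NAMESPACE}}],
    },
}
requests.put(f"{TENANT_API}/config/namespaces/{NAMESPACE}/http_loadbalancers/refer-a-friend-lb",
             json=lb_update, headers=HEADERS, timeout=30).raise_for_status()
```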

Now when a user of the Arcadia app goes to refer a friend, the callout the app makes is routed to the new location, where the module is managed and run by the contractor.

Demo

Watch the following video for an overview of this solution and a walkthrough of the steps above in the F5 Distributed Cloud Console.

Conclusion

Using F5 Distributed Cloud with modern distributed apps, it is remarkably easy to route requests intended for a specific module to a new location, regardless of the provider, provider-specific requirements, or the IP space the new module runs in. This is the true power of using F5 Distributed Cloud to glue together modern distributed apps.

Updated Feb 06, 2023
Version 2.0