Using F5 Distributed Cloud private connectivity orchestration for secure multi-cloud infrastructure

Introduction

Enterprises run modern apps that access services in many locations. Users of productivity apps, like Office 365, must connect to services in the cloud from on-prem locations. To keep this running well, enterprises must provide connectivity that’s fast, reliable, and private.

Traditionally, it has taken many steps to create private connections to a public cloud subscription and route application-specific traffic over them. The F5 Distributed Cloud Platform orchestrates ExpressRoute in Azure and Direct Connect in AWS, eliminating many of the steps needed for end-to-end routing. Distributed Cloud private connectivity orchestration makes it easier than ever to connect and configure routing over existing private and dedicated circuits from on-prem locations to cloud services running in AWS and Azure.

The illustration below outlines the basic components of an ExpressRoute service in Azure, but there’s a lot more you’ll need to know about just under the covers.

Without orchestration, many steps are needed to enable routing between on-prem sites and Azure. This requires expert knowledge of Azure networking, numerous dependent resources to be built, and advanced knowledge of routing protocols, specifically the Border Gateway Protocol (BGP).

  1. Extend on-prem network to a colo provider
  2. Create and provision the ExpressRoute Circuit
  3. Create a Virtual Network Gateway
  4. Create a connection between ExpressRoute Circuit & Virtual Network Gateway (VNG)
  5. Configure a Route Server to propagate routes between VNG and on-prem
  6. Configure user-defined routes on each subnet in each VNet in Azure

Using Distributed Cloud to orchestrate ExpressRoute in Azure and Direct Connect in AWS reduces the total number of steps to just an essential few. You no longer need to be an expert in Azure networking or BGP routing, and you gain the ability to control connectivity with intent-based policies built natively into the Distributed Cloud Platform. An example of an intent-based policy is configuring VNet tagging in Azure and using it with a firewall policy that allows access only to specific apps or by select users. Additional policies that support tagging include Distributed Cloud WAAP and Distributed Cloud App Infrastructure Protection.
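For illustration only, the fragment below sketches what such a tag-driven rule could look like in YAML. The object and field names are hypothetical placeholders rather than the exact Distributed Cloud firewall-policy schema; the point is that the intent (allow traffic only to workloads carrying a tag-derived label) is expressed declaratively instead of as routes and access lists.

# Hypothetical sketch only: names and fields are illustrative, not the exact
# Distributed Cloud firewall-policy schema.
metadata:
  name: allow-arcadia-only
  namespace: system
spec:
  rules:
    - action: ALLOW
      destination_label_selector:
        expressions:
          - "app in (arcadia)"   # label derived from the Azure VNet tag
    - action: DENY
      destination_any: {}        # everything else is blocked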

The following details cover the key components needed to support direct connectivity and show how to create the services and deploy a privately routed app in Distributed Cloud.

Building ExpressRoute to Azure


With Distributed Cloud orchestration, the steps are reduced to:

  1. Extend on-prem network to a colo provider
  2. Create and provision the ExpressRoute Circuit
  3. Enable the ExpressRoute orchestration feature on an Azure VNet Site configured in Distributed Cloud

 

To create an ExpressRoute orchestrated configuration in Distributed Cloud, navigate to Multi-Cloud Network Connect > Site Management > Azure VNET Sites > Add Azure VNET Site, or Manage Configuration for an existing Site. Enter the required parameters, and when you reach “Ingress Gateway or Ingress/Egress Gateway”, select “Ingress/Egress Gateway (Two Interface) …”. Here you have the option to deploy on a Recommended Region or an Alternate Region. This selection depends entirely on your business’ cloud deployment model.


After choosing the model that best fits your environment, configure the number of Availability Zones for the Gateway and the subnets (new or existing) that it will join, and Apply the settings. Now scroll down to Advanced Options (enabling Advanced Fields) and select VNet type: Hub VNet. Click “View Configuration” and add any existing VNets from your Azure subscription that should inherit orchestrated routing. Next, change the “Express Route Configuration” to Enabled to expand the dropdown and access the ExpressRoute Circuit and Virtual Network Gateway settings.

Under “* Connections”, add the ExpressRoute Circuit configuration for your Azure subscription(s). The required fields are the Name and the Express Route Circuit, which is the Resource ID for the circuit in Azure.

Note: When configuring more than one circuit, you may also want to configure the Routing Weight for circuit preference. When configuring an ExpressRoute circuit from another subscription (not shown below), you’ll also need an Authorization Key.

For ease of deployment, it’s recommended to use the default values for the remaining fields, including the Gateway SKU, the Subnet for Azure VNet Gateway, the Subnet for Azure Route Server, and the ASN Configuration for BGP between the Site and Azure Route Servers.
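Viewed through the site object’s JSON/YAML editor in the console, the ExpressRoute portion of the Azure VNet Site ends up looking roughly like the hedged fragment below. The field names here are assumptions that mirror the console form labels, so confirm the exact schema in your tenant; only the circuit Resource ID format is definitive, since that is a standard Azure resource identifier.

# Hedged sketch of the ExpressRoute settings on an Azure VNet Site; field
# names mirror the console form labels and may differ from the exact schema.
express_route_enabled:
  connections:
    - name: primary-er-circuit
      # Resource ID of the ExpressRoute circuit in your Azure subscription
      express_route_circuit: /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/expressRouteCircuits/<circuit-name>
      # routing_weight and authorization_key are only needed for multiple
      # circuits or for a circuit in another subscription (see the notes above)
  # Defaults are used for the Gateway SKU, the VNet Gateway and Route Server
  # subnets, and the ASN for BGP between the Site and Azure Route Servers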


After the configuration is fully saved and deployed, with the site status showing Applied on the Cloud Sites page, all resources in Azure are set to use the ExpressRoute circuit(s) for all designated L3 routed traffic.

Next, we’ll configure Direct Connect orchestration on an AWS VPC connected site. AWS TGW connected sites are also supported.

Building Direct Connect for AWS

To create a Direct Connect orchestrated configuration in Distributed Cloud, navigate to Multi-Cloud Network Connect > Site Management > AWS VPC Sites > Add AWS VPC Site or Manage Configuration for an existing Site. Enter the required parameters, and when you reach the “Ingress Gateway or Ingress/Egress Gateway”, choose the form factor that meets your deployment requirements. Scroll down to Advanced Configuration, enable Advanced Fields, and then Enable Direct Connect.

When configuring the Direct Connect connection feature, choose either Hosted VIF or Standard VIF mode. Use Hosted VIF when you’re already using the Direct Connect connection for other purposes in AWS or when the VIF is in another AWS account. Otherwise, Standard VIF mode allows Distributed Cloud to automatically create the VIF and the dependent AWS resources mentioned below to access the Direct Connect connection. Standard VIF mode creates the following additional resources in AWS:

  • Virtual Private Gateway (VGW): associated with the VPC, with route propagation enabled to the inside route tables
  • Direct Connect Gateway (DCG): associated with the VGW

Note: In Standard VIF mode, at the end of the deployment admins may copy the Direct Connect gateway ID and use it to create other VIFs. Admins may also copy the ASN; this is the AWS-side ASN that network ops teams need to configure BGP peering.

Note: In Hosted VIF mode, the following must be handled independently of the site deployment:

  • Creating the VGW, associating it with the VPC, and enabling route propagation to inside route tables
  • Creating the DCG and associating it with the VGW
  • Accepting the Hosted VIF and linking it to the DCG

Optionally, you may configure a Custom ASN if needed to work with an existing BGP configuration, or choose Auto to let Distributed Cloud assign one. Apply the configuration, save changes, and exit to the general Sites page.
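As a rough YAML sketch, the Direct Connect section of the AWS VPC Site object captures the same choices. As with the Azure example above, the field names follow the console labels and are assumptions, not the exact schema, so verify them in the console’s JSON/YAML view.

# Hedged sketch of the Direct Connect settings on an AWS VPC Site; field
# names mirror the console form labels and may differ from the exact schema.
direct_connect_enabled:
  # Standard VIF mode: Distributed Cloud creates the VIF plus the VGW and DCG
  # described above; use hosted VIF mode instead when the connection or VIF
  # lives in another AWS account
  standard_vif: {}
  # Custom ASN to match an existing BGP design; choose Auto instead to let
  # Distributed Cloud assign one
  custom_asn: 64512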

After the configuration is fully saved and deployed, with the site status showing Applied on the Cloud Sites page, all resources in AWS are set to use the Direct Connect gateway for all designated L3 routed traffic.

Adding Private Connectivity On-Prem

The final part of this deployment is routing both the ExpressRoute and Direct Connect circuits to an on-prem site. Both circuits terminate in a colo facility, and standard IT/NetOps teams handle the routing from there to the destination, outside the realm of Distributed Cloud.

Building a Distributed App w/ Private Connectivity

With Distributed Cloud having orchestrated the routing to each site’s workloads, and IT/NetOps having configured routing on-prem, including propagating the on-prem routes over BGP, an app whose components run independently can now be accessed as one unified interface.

An example of a distributed app that runs perfectly in this environment is the demo app Arcadia Finance. This app has four components:

  • Main – Frontend Web interface
  • API – An App module accessed by Main to support money transfers
  • Refer-A-Friend (Not used) – An App module interface accessed by Main to invite friends
  • Backend – A DB server that stores money transfer accounts used by the API module, stock portfolio positions used by the Main module, and email addresses saved by the Refer-A-Friend module.

Functionally, the connection flow is as follows:

  1. Users access a VIP advertised to the Internet by an F5 Global Network Regional Edge
  2. User traffic is connected to the Main (frontend) app running in AWS via the F5 Global Network
  3. Main App connects to API in Azure to load the money-transfer side frame, and then to the Backend DB on-prem to load the stocks portfolio balances. These connections transit the private connectivity links created in this article.
  4. API App in Azure connects to the Backend DB on-prem to retrieve money transfer accounts. This connection transits the private connectivity links created in this article.

To support this topology and configuration, the apps are divided and run as follows:

AWS

  • Frontend (nginx)
  • Main (Web)
  • Refer-A-Friend

Azure

  • API (App)

On-Prem

  • Backend (DB)

To make the app reachable to users, use the Distributed Cloud console’s Distributed Apps feature to create one HTTP Load Balancer with the VIP advertised to the Internet and with an origin pool pointing to the Frontend (nginx) app. Note: This step assumes that you have already created a fully connected AWS CE Site with connectivity to your VPCs and a Direct Connect circuit, as covered in the section above.

Navigate to Multi-Cloud App Connect > Manage > Load Balancers > Origin Pools and create a new origin pool. At the top of the pool creation menu, select “JSON” and change the format to YAML, then paste the following example, changing specific values, such as the namespace, to match your environment:

metadata:
  name: mcn-aws-workload-pool
  namespace: mcn-privatelinks
  labels:
    ves.io/app_type: arcadia
  annotations: {}
  disable: false
spec:
  origin_servers:
    # Private IP of the Frontend (nginx) origin, reached over the inside
    # network of the AWS VPC CE site identified below
    - private_ip:
        ip: 10.100.2.238
        site_locator:
          site:
            tenant: acmecorp-tnxbsial
            namespace: system
            name: soln-eng-aws-dc
            kind: site
        inside_network: {}
      labels: {}
  # Plain HTTP to the origin service on port 8000
  no_tls: {}
  port: 8000
  same_as_endpoint_port: {}
  loadbalancer_algorithm: LB_OVERRIDE
  endpoint_selection: LOCAL_PREFERRED

With the origin pool created, navigate to Distributed Apps > Manage > Load Balancers > HTTP Load Balancers, and add a new one with the following YAML provided as an example:

metadata:
  name: mcn-arcadia-frontend
  namespace: mcn-privatelinks
  labels:
    ves.io/app_type: arcadia
  annotations: {}
  disable: false
spec:
  domains:
    - mcn-arcadia-frontend.demo.internal
  http:
    dns_volterra_managed: false
    port: 80
  downstream_tls_certificate_expiration_timestamps: []
  # Advertise the VIP to the Internet on the F5 Global Network default public VIP
  advertise_on_public_default_vip: {}
  # Route all requests to the origin pool created above
  default_route_pools:
    - pool:
        tenant: acmecorp-tnxbsial
        namespace: mcn-privatelinks
        name: mcn-aws-workload-pool
        kind: origin_pool
      weight: 1
      priority: 1
      endpoint_subsets: {}

 

Internally verify end-to-end connectivity

Opening a command-line shell on the Frontend (Web) app and running a traceroute-style test with hping3, followed by curl, shows each hop identified as privately connected, with connectivity established directly to the destination and no intermediary in the path.

The following IP addresses are used to support a TCP connection from the Frontend (Web) app running in AWS to the API app running in Azure:

  • 10.100.2.238 (Source): Frontend (Web)
  • 172.18.0.1: Container host node
  • 192.168.1.6: AWS Direct Connect Gateway
  • 192.168.1.5: On-Prem router
  • 192.168.1.22: Azure ExpressRoute Circuit endpoint
  • 10.101.1.5 (Destination): App (API)

In the CLI output, note each hop and the value in the HTTP Response header “Server”:

root@1e40062cb314:/etc/nginx# hping3 -ST -p 8080 api
HPING api (eth2 10.101.1.5): S set, 40 headers + 0 data bytes
hop=1 TTL 0 during transit from ip=172.18.0.1 name=UNKNOWN  
hop=1 hoprtt=7.7 ms
hop=2 TTL 0 during transit from ip=192.168.1.6 name=UNKNOWN  
hop=2 hoprtt=31.4 ms
hop=3 TTL 0 during transit from ip=192.168.1.5 name=UNKNOWN  
hop=3 hoprtt=67.3 ms
hop=4 TTL 0 during transit from ip=192.168.1.22 name=UNKNOWN  
hop=4 hoprtt=67.2 ms
^C
--- api hping statistic ---
8 packets transmitted, 4 packets received, 50% packet loss
round-trip min/avg/max = 7.7/43.4/67.3 ms
root@1e40062cb314:/etc/nginx# curl -v api:8080
* Rebuilt URL to: api:8080/
* Hostname was NOT found in DNS cache
*   Trying 10.101.1.5...
* Connected to api (10.101.1.5) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: api:8080
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx/1.18.0 (Ubuntu) is not blacklisted
< Server: nginx/1.18.0 (Ubuntu)
< Date: Thu, 12 Jan 2023 18:59:32 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Fri, 11 Nov 2022 03:24:47 GMT
< Connection: keep-alive
< ETag: "636dc07f-264"
< Accept-Ranges: bytes

 

Conclusion

As more services are deployed to and run in the cloud, enterprises increasingly require dedicated, reliable, and secure private connectivity. Establishing this connectivity is not a trivial task, and it requires many hands across different departments. Distributed Cloud private connectivity orchestration helps streamline the process by eliminating many of the steps required in each cloud provider, including the need for dedicated cloud and routing-protocol experts just to configure these services manually.

To see how all of the parts come together in action, watch the following video, a companion to this article.

Visit the following resources for more information about this feature and other Distributed Cloud services:

Multi-Cloud Network Connect Product Information

Direct Connect orchestration for AWS TGW Sites

Direct Connect orchestration for AWS VPC Sites

ExpressRoutes orchestration for Azure VNet Sites

YouTube Video

Updated Jun 08, 2023
Version 5.0
