VMware Cloud for AWS - BIG-IP in Single-Site, Hybrid, and Multi-Cloud Deployments

Introduction

This is article 2 of 2. Its aim is to provide useful information for planning a VMC/multi-cloud deployment, for example when creating a High Level Design (HLD) document. For background on VMC and the aspects relevant to BIG-IP, please see the previous article, VMware Cloud for AWS - Networking and High Availability.

VMC uses NSX-T for networking, but VMC on AWS currently allows only a single Tier-1 Gateway, which limits the possible networking topologies. In this blog post we describe a suggested topology for BIG-IP in VMC on AWS. This baseline arrangement is then reused in the multi-cloud sample topology presented later. VMware’s HCX migration tool is also covered briefly.

BIG-IP in a single site

Of the four topologies described in the F5 BIG-IP deployment guide for NSX-T, customers are currently constrained to Topology D, which uses SNAT by default. This topology is shown in the next diagram.

In this sample topology, we create a typical 3-tier architecture with Frontend (External Service), Application (Internal Service), and Database tiers. Notice that the Database tier is configured as “Disconnected” to provide an additional layer of security by controlling access through a VIP on the BIG-IP.
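
For illustration, the VIP that exposes the “Disconnected” Database tier could be created with a single iControl REST call against the BIG-IP. The snippet below is only a minimal sketch: the management address, credentials, pool name (db-pool), and VIP address/port are all hypothetical, and the pool containing the database servers is assumed to already exist.

```python
import requests

# Hypothetical BIG-IP management address and credentials.
BIGIP = "https://192.0.2.10"
AUTH = ("admin", "admin-password")

# Virtual server (VIP) that is the only entry point into the otherwise
# "Disconnected" Database tier. The pool "db-pool" with the database
# servers is assumed to already exist.
virtual = {
    "name": "vs-db-tier",
    "destination": "10.10.30.100:3306",  # hypothetical VIP address/port
    "ipProtocol": "tcp",
    "pool": "db-pool",
    # Topology D uses SNAT, so server replies return through the BIG-IP.
    "sourceAddressTranslation": {"type": "automap"},
}

resp = requests.post(
    f"{BIGIP}/mgmt/tm/ltm/virtual",
    json=virtual,
    auth=AUTH,
    verify=False,  # lab only; verify TLS certificates in production
)
resp.raise_for_status()
print(resp.json()["fullPath"])
```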

The above topology can be expanded with multiple BIG-IP Scale-N clusters. This allows isolation between different Business Units or departments, each with its own BIG-IP Scale-N cluster. Although out of scope for this blog, it is worth remembering that in these multi-cluster BIG-IP deployments (in a single site or across sites) BIG-IQ can be used for global visibility and centralized management.

Using EC2 workloads

From the point of view of the BIG-IPs, VMC is just another routing environment from which EC2 workloads can also be reached. These workloads can be incorporated dynamically into the BIG-IP configuration using AS3’s Service Discovery feature. Moreover, VM reachability is the same from VMC to the VPC as vice versa, and the same applies to Internet access. This raises the following questions:

  • Where to place the BIG-IPs?
  • Where to place the Internet Gateway?

There is no definitive answer: each function can be placed either on the AWS VPC side or on the VMC side, as shown in the next figure. The decision should consider the following aspects:

  • At the time of this writing, using an AWS IGW instead of the IGW via VMC allows the use of ELBs, which provide AWS Shield Advanced capabilities.
  • The cost will depend on where most of the traffic is handled and where most of the compute resources reside.

Using HCX

VMware's HCX covers several migration-related use cases, including Disaster Recovery. HCX's Network Extension capability permits keeping the same IP and MAC addresses during a VM migration. This minimizes service disruption and is transparent to all devices, including BIG-IP. Furthermore, HCX doesn’t mandate how the services are exposed externally; therefore GSLB is always a valid option and provides greater flexibility than a plain routing option.

BIG-IP in Multi-cloud

Multi-cloud allows for many use cases and, as a consequence, many designs are possible. Ultimately the design will depend heavily on the applications and on the databases, which most of the time require replication across sites. From the point of view of BIG-IP there are very few restrictions.

Next we will describe two multi-cloud scenarios:

  • A hybrid design focused on local data retention, implemented with a single site plus cloud bursting.
  • A generic multi-cloud design that can be applied to any public cloud or private data centers.

Single site with cloud bursting design

The topology described next is suitable for smaller deployments or when data must be stored on-premises, usually because of data retention policies or regulations. This can be observed in the next figure, where the DB tier is not stretched to the public cloud.

In this architecture the on-premises data center is stretched to a public cloud when load conditions require additional compute capacity. In this scenario Internet access is kept in the on-premises data center. It requires a high-performance, low-latency Direct Connect link, which usually means the cloud region is within the metropolitan area of the on-premises facility. The Direct Connect circuit needs to be established only once, and its capacity can be increased ahead of peak periods. Some colocation vendors allow changing a circuit’s capacity programmatically.

Dynamically changing compute is a perfect fit for AS3’s Service Discovery feature, which automatically populates the pools as compute instances are added or removed. Please check the clouddocs.f5.com site for this and other automation options.
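
As a minimal sketch of what such a declaration could look like, the following posts an AS3 declaration whose pool members are discovered from EC2 by tag. All names, addresses, region, and tag values are hypothetical, and it assumes AS3 is installed on the BIG-IP:

```python
import requests

BIGIP = "https://192.0.2.10"        # hypothetical management address
AUTH = ("admin", "admin-password")  # hypothetical credentials

# AS3 declaration with an AWS Service Discovery pool: the BIG-IP polls
# EC2 and adds/removes pool members matching the tag automatically.
declaration = {
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.20.0",
        "bursting": {
            "class": "Tenant",
            "frontend": {
                "class": "Application",
                "template": "http",
                "serviceMain": {
                    "class": "Service_HTTP",
                    "virtualAddresses": ["10.10.10.100"],  # hypothetical VIP
                    "pool": "web_pool",
                },
                "web_pool": {
                    "class": "Pool",
                    "members": [{
                        "servicePort": 80,
                        "addressDiscovery": "aws",
                        "region": "us-east-1",   # hypothetical region
                        "updateInterval": 60,    # poll EC2 every 60 seconds
                        "tagKey": "app",         # discover instances by tag
                        "tagValue": "frontend",
                        "addressRealm": "private",
                        # The BIG-IP runs in VMC rather than in EC2, so AWS
                        # API credentials are needed; placeholders only.
                        "accessKeyId": "AKIA...",
                        "secretAccessKey": "<secret>",
                    }],
                },
            },
        },
    },
}

resp = requests.post(f"{BIGIP}/mgmt/shared/appsvcs/declare",
                     json=declaration, auth=AUTH, verify=False)
resp.raise_for_status()
```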

Generic multi-cloud design

In general, F5 recommends Global Server Load Balancing (GSLB) for multi-cloud because of the following benefits: it is cross-cloud vendor, it is name-based with a high degree of control, and it provides stickiness and IP intelligence. F5 offers GSLB in two form factors: Software as a Service (SaaS) with F5 Cloud Services’ DNS LB service, and self-managed with F5 BIG-IP’s DNS module.
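
For the self-managed option, a sketch of what the configuration could look like via iControl REST follows. It assumes the GSLB server objects for each site (hypothetical names site1-bigip and site2-bigip) and their virtual servers have already been defined on the BIG-IP DNS:

```python
import requests

BIGIP_DNS = "https://192.0.2.20"    # hypothetical BIG-IP DNS address
AUTH = ("admin", "admin-password")  # hypothetical credentials

s = requests.Session()
s.auth = AUTH
s.verify = False  # lab only; verify TLS certificates in production

# GSLB pool whose members are virtual servers at each site. The server
# objects (site1-bigip, site2-bigip) are hypothetical and assumed to be
# already defined and health-monitored via iQuery.
s.post(f"{BIGIP_DNS}/mgmt/tm/gtm/pool/a", json={
    "name": "app_gslb_pool",
    "loadBalancingMode": "round-robin",
    "members": [
        {"name": "/Common/site1-bigip:vs-app"},
        {"name": "/Common/site2-bigip:vs-app"},
    ],
}).raise_for_status()

# Wide IP: the FQDN that clients resolve; DNS answers point to whichever
# site the pool's load balancing method selects.
s.post(f"{BIGIP_DNS}/mgmt/tm/gtm/wideip/a", json={
    "name": "app.example.com",
    "pools": [{"name": "app_gslb_pool"}],
}).raise_for_status()
```

Clients resolving app.example.com are then answered with the address of whichever site the pool’s load balancing method selects, independently of which cloud vendor hosts each site.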

Designs depend on the applications and on the databases; inter-site dependencies play a crucial role. This guide recommends the following design principles to minimize cost and maximize reliability while keeping simplicity in mind:

  • Typically, ADCs like BIG-IP deal with Frontend-tier and App-tier servers, which should not have to talk to peers in other sites. These tiers have the highest throughput and latency demands, so inter-site communication should be avoided. Otherwise, this could result in uneven performance and unnecessary costs.
  • Identify strictly necessary inter-site dependencies. The typical case is DB replication, which has much lower throughput demands. Latency is also less of an issue because replication often happens asynchronously.
  • There are other very relevant sources of inter-site traffic, such as automation, VM migration, and data-store replication (for example, a repository of images). VMware’s HCX traffic fits in this category.
  • The first two items in this list deal with traffic that is generated upon client requests (blue arrows in the figure below). The third item, on the other hand, is a different category of traffic (orange arrows) that is not expected to be in the path of an ongoing client request. Another characteristic of this traffic is that its demands will greatly depend on the frequency of updates to the applications.
  • Simpler sites are easier to manage, scale, and replicate. GSLB allows for distribution of workloads based on a site’s or a service’s load and capacity, so it is perfectly fine to have differently sized data centers. The most important attribute is that they are architecturally equal. Automations that are cross-cloud capable are advised.

Using BIG-IP DNS and following the above guidelines, we can create a cross-cloud vendor solution using GSLB, as shown in the next figure.

Probably the most remarkable aspect of the diagram is the network dependencies and demands that drive the design. In this diagram, inter-site dependency is reduced to a minimum, typically DB replication only.

We can also see that there is additional inter-site traffic, such as BIG-IP DNS iQuery (used for service discovery and health probing), but this traffic is different in nature because it is failure tolerant.

In the design above, the DNS functionality is implemented in standalone BIG-IPs because redundancy is accomplished by having an independent BIG-IP DNS at each site. Keeping BIG-IP DNS separate from the BIG-IP Scale-N cluster that handles client traffic gives clarity to the diagram and, more relevantly, sets a clear demarcation of functions. If desired, the BIG-IP DNS functionality can be consolidated into the BIG-IP Scale-N cluster at each site. At extra cost, BIG-IP DNS could instead be placed in Internet exchanges. This allows:

  • To be closer to the clients. This only slightly improves DNS performance, since clients’ local DNS resolvers usually reply from their cache.
  • To have a closer view of clients’ network performance and reachability to the clouds. This is very relevant.

At the end of the day, all designs have their pros and cons, and a balancing act has to be performed. In any case, simplicity should always be a priority. In this respect, BIG-IP DNS has very few constraints and greatly simplifies any existing deployment thanks to its automatic service discovery.

Conclusion

BIG-IP integrates with VMC just as it does with NSX-T, by using routing. In the case of VMC on AWS there are at present limitations that prevent using the same topologies as in private clouds. BIG-IQ can be leveraged to simplify the management of multiple BIG-IPs in the same or multiple sites.

GSLB is king for multi-cloud deployments: it is cross-cloud vendor and provides greater flexibility and functionality than plain routed options.

Multi-cloud is a broad topic; we refer the reader to the F5 BIG-IP deployment guide for NSX-T for a more detailed discussion of the topics described in this blog.

Published Dec 01, 2020
Version 1.0
