Multi-port support for HTTP/TCP load balancers in F5 Distributed Cloud (XC)

Overview:

In today’s ever-evolving, innovation-driven digital landscape, catering to new requirements is vital for modern application scalability, adaptability, and longevity. Multi-port support refers to the capability of a system to handle and manage multiple application ports simultaneously. This flexibility is particularly important in scenarios where a single device needs to serve diverse services.

Multi-port support is essential for various reasons, including the following:

  1. Parallel Processing: It allows the system to process multiple app streams concurrently, enhancing efficiency and reducing latency.
  2. Diverse Services: Different applications or services often require dedicated ports to function. Multi-port support enables a system to accommodate a variety of services simultaneously.
  3. Load Balancing: Distributing application traffic across multiple ports helps balance the load, preventing bottlenecks and optimizing resource utilization.
  4. Security: SecOps teams sometimes need testing ports opened, which allow access to applications for testing, scanning, monitoring, and addressing potential security vulnerabilities.
  5. Flexibility: Systems with multi-port support are adaptable to modern micro-service-based architectures, supporting a diverse range of applications and services.
  6. IP limitations: Since IP addresses are limited, customers don’t want to dedicate a different IP to each service; instead, they want to reserve a single IP and distribute the load across different ports.

Note: For today’s demonstration, we have deployed multiple demo applications (JuiceShop, DVWA, NGINX, and F5 Air) as micro-services on multiple systems/ports to showcase the capabilities of multi-port support; their deployment steps are out of scope for this article.

 

Let’s unravel the three real-world use cases of multi-port support below and see how each can be implemented in F5 Distributed Cloud (F5 XC) in easy-to-follow steps.

 

Use case I – Multiple Ports

In this use case, let’s assume the customer has already onboarded their backend application as an origin pool in F5 XC. Next, the customer wants to access the same application using multiple ports, either for genuine access or for testing.

To achieve this, follow the steps below:

  1. Log in to the F5 XC console and navigate to the “Distributed Apps” --> “Manage Load balancer” section
  2. For this use case, create an HTTP load balancer with a name, a domain name, your backend application as the origin pool, the needed ports in CSV format, and the type set to HTTP, as shown below.
    NOTE: Provide only unused ports or you will run into port conflict errors. Also configure DNS records as per your setup.


  3. Once the load balancer is created successfully, validate that your application is accessible on the configured ports and the LB domain name (a quick validation sketch follows these steps)
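
For a quick command-line check in addition to the browser, here is a minimal Python sketch; the domain name and port list are assumptions and should be replaced with your own load balancer values:

    # Minimal validation sketch: the domain and ports below are assumptions,
    # not values from this setup; replace them with your own LB configuration.
    import requests

    LB_DOMAIN = "multiport-demo.example.com"   # hypothetical HTTP LB domain
    PORTS = [8080, 8081, 8082]                 # hypothetical ports from the CSV list

    for port in PORTS:
        url = f"http://{LB_DOMAIN}:{port}/"
        try:
            resp = requests.get(url, timeout=5)
            print(f"{url} -> HTTP {resp.status_code}")
        except requests.RequestException as exc:
            print(f"{url} -> FAILED ({exc})")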

     

Use case II – Port Range 

In this scenario, customers need to access an application over a range of ports, either for parallel processing or for load balancing.

To configure this, follow the steps below:

  1. Log in to the F5 XC console and navigate to the “Distributed Apps” section
  2. For this use case, create an HTTPS load balancer with your backend application, the needed port range, and the domain name, as shown below.
    NOTE: Provide only an unused port range to avoid port conflict errors.
  3. Validate that your application is accessible on the configured ports, as shown below (see also the validation sketch after this list)
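
As with the previous use case, a short Python sketch can loop over the whole range; the domain, the 8080–8085 range, and the use of verify=False (for a lab certificate) are assumptions to adjust for your environment:

    # Sketch only: the domain and port range are assumptions; adjust to your setup.
    import requests

    LB_DOMAIN = "portrange-demo.example.com"   # hypothetical HTTPS LB domain
    PORT_RANGE = range(8080, 8086)             # hypothetical configured range 8080-8085

    for port in PORT_RANGE:
        url = f"https://{LB_DOMAIN}:{port}/"
        try:
            # verify=False only if the lab LB presents a self-signed certificate
            resp = requests.get(url, timeout=5, verify=False)
            print(f"{url} -> HTTP {resp.status_code}")
        except requests.RequestException as exc:
            print(f"{url} -> FAILED ({exc})")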
 

Use case III – Origin Pool Dynamic Port

In this use case, the backend application port needs to be dynamic and depends on the port through which the load balancer is accessed. Let’s say a customer has multiple services running on multiple ports and wants users to access these services through a single TCP load balancer.

To implement this, follow the steps below:

  1. Log in to the F5 XC console and navigate to the “Distributed Apps” section
  2. Next, move to the “Origin Pool” section, onboard your basic backend application details, and select the “origin server port” option as the “loadbalancer port” (as shown below). We can also configure health checks against the LB ports instead of endpoints for better visibility.
  3. We are halfway there! Move to the “TCP Load balancer” section and create a TCP load balancer with the required port ranges and your application origin pool. Your configuration will look something like the one below
  4. Finally, for the fun part: once the load balancer reaches the READY state, open a browser and make sure the different services are accessible on the configured domain name and ports shown below (a simple connectivity check is also sketched after this list)
     
    NOTE: For the above solution to work, multiple services should be running on the configured ports of the backend system, and this port range should not be used by other services on the XC platform
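
To confirm each service behind the TCP load balancer from the command line, the following Python sketch attempts a TCP connection per port; the domain name and the port-to-application mapping are assumptions based on the demo apps mentioned earlier, not values from this setup:

    # Rough sketch: the domain and port-to-service mapping are assumptions;
    # replace them with your own TCP load balancer configuration.
    import socket

    LB_DOMAIN = "tcp-multiport-demo.example.com"   # hypothetical TCP LB domain
    SERVICES = {                                   # hypothetical LB port -> demo app mapping
        3000: "JuiceShop",
        8081: "DVWA",
        8082: "NGINX",
        8083: "F5 Air",
    }

    for port, app in SERVICES.items():
        try:
            # With dynamic origin ports, the LB port is used as the origin server port
            with socket.create_connection((LB_DOMAIN, port), timeout=5):
                print(f"{app}: TCP connection to {LB_DOMAIN}:{port} succeeded")
        except OSError as exc:
            print(f"{app}: TCP connection to {LB_DOMAIN}:{port} failed ({exc})")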


We have just scratched the surface of the wide range of multi-port use cases, and there is strong market demand for many scenarios combining HTTP/HTTPS/TCP, with single or multiple services on the same system or across multiple backend systems; traffic can also be routed to the appropriate backends using port range filters in routes. Based on customer requirements, the appropriate configurations can be made in F5 XC for seamless integration and to leverage its pervasive WAAP security ecosystem.

 

Conclusion:

Winding up, this article pondered the market demand for multi-port range support in HTTP/TCP load balancers and then took you on a roller coaster ride through different use cases. Finally, we also demonstrated how F5 XC can help shape and optimize your application’s versatile multi-port requirements.

 

Ever wondered what F5 XC is and how it acts as a “Guardian of Applications”? Check the links below:
