Seamless App Connectivity with F5 and Nutanix Cloud Services

Modern enterprise networks face the challenge of connecting diverse cloud services and partner ecosystems, which expands routing domains and multiplies distributed application connections. As network complexity grows, the ability to onboard applications quickly becomes paramount despite intricate networking operations. Automation tools such as Ansible help manage network and security devices, but connecting distributed applications across newer platforms such as public clouds and Kubernetes remains difficult.

This article explores F5 Distributed Cloud Services (F5 XC), which facilitates intent-based application-to-application connectivity across varied infrastructures.

Expanding and increasingly sophisticated enterprise networks

Traditional data center networks typically rely on VLAN-isolated architectures with firewall ACLs for intra-network security. With digital transformation and cloud migrations, new platforms such as Kubernetes and Red Hat OpenShift must coexist with existing networks, shifting segmentation from VLANs to labels and namespaces. Future trends point to increased edge computing adoption, further complicating network infrastructures.

Network infrastructure is becoming more complex due to factors such as additional NAT settings, routing adjustments, added ACLs, and the need to resolve duplicate IP addresses when integrating new and existing networks. This growing complexity drives up the costs of design, implementation, and testing, amplifying the impact of every network change.

A typical network uses firewalls to control access between VLANs or isolated networks based on security policies. Access control lists (ACLs) are organized by source and destination applications or clients. The order of ACL entries matters: because rules are evaluated first-match, a broader rule with a shorter prefix placed earlier can shadow a more specific rule below it. In some cases, ACLs exceed 10,000 lines, making it difficult to identify which rules affect application connectivity. As a result, ACLs for discontinued applications often remain in the firewall configuration, posing potential security risks. Refactoring them, however, requires significant effort and testing.
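To illustrate why rule order matters, here is a minimal Python sketch that flags later, more specific ACL entries shadowed by earlier, broader ones. The rule format is assumed purely for illustration and is not tied to any vendor's ACL syntax.

    # Minimal sketch: detect ACL entries shadowed by earlier, broader rules.
    # The (action, source prefix, destination prefix) format is assumed for
    # illustration; real firewall ACLs also carry ports, protocols, etc.
    from ipaddress import ip_network

    acl = [
        ("permit", "10.0.0.0/8",    "192.168.0.0/16"),   # broad rule listed first
        ("deny",   "10.1.2.0/24",   "192.168.10.0/24"),  # never matched: shadowed above
        ("permit", "172.16.0.0/12", "192.168.10.0/24"),
    ]

    def shadowed_rules(rules):
        """Return rules whose source and destination fall entirely inside an earlier rule."""
        hits = []
        for i, (_, src, dst) in enumerate(rules):
            for j in range(i):
                _, esrc, edst = rules[j]
                if (ip_network(src).subnet_of(ip_network(esrc))
                        and ip_network(dst).subnet_of(ip_network(edst))):
                    hits.append((i, j))
                    break
        return hits

    for later, earlier in shadowed_rules(acl):
        print(f"rule {later} is shadowed by rule {earlier} and will never match")

At a 10,000-line scale, this kind of shadowing is exactly what makes it hard to tell which rules still matter and which belong to long-discontinued applications.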

 

F5 XC addresses the ACL issue by managing all application connectivity across on-premises sites, data centers, and clouds. Its load balancer connects applications through VLANs, VPCs, and other networks. Each load balancer has its own application controls and associated ACLs, so changes to one load balancer do not affect others. When an application is discontinued, its load balancer is deleted, preventing leftover configurations and reducing security risks.

SNAT/DNAT is commonly used to resolve IP conflicts, but it reduces observability, since every NAT table between the client and the application must be consulted to trace a flow. ACL implementations also vary across router and firewall vendors: some apply filters before NAT, others after, and logging often follows the same behavior, which makes it difficult to normalize IP addresses across devices. Pre-NAT IP addresses must be managed across the entire network, adding complexity and increasing costs.
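As a rough illustration of that observability cost, the following Python sketch shows how recovering the original client IP from an application log means walking back through every NAT hop between the application and the client. The NAT table format here is assumed for illustration only.

    # Sketch: walking back through chained SNAT tables to recover the real client IP.
    # Each hop's translation table format is assumed for illustration only.
    nat_hops = [
        # hop name, {translated source -> original source}
        ("branch-firewall",  {"203.0.113.10": "192.168.50.25"}),
        ("dc-border-router", {"10.10.0.5": "203.0.113.10"}),
    ]

    def trace_client_ip(observed_src: str) -> str:
        """Resolve the source IP seen in an application log back to the original client."""
        ip = observed_src
        # Walk the hops from the application side back toward the client.
        for hop, table in reversed(nat_hops):
            original = table.get(ip)
            if original:
                print(f"{hop}: {ip} was translated from {original}")
                ip = original
        return ip

    print("original client:", trace_client_ip("10.10.0.5"))

Every device in the chain must keep (and expose) its translation state for this to work, which is exactly the operational burden described above.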

F5 XC simplifies distributed load balancing by terminating client sessions and forwarding them to endpoints. Clients and endpoints only need IP reachability to the CE interface. The load balancer can expose a VIP for client access, but the VIP does not have to be managed globally, because it is defined within the client-side network. This provides clear source and destination data, making application access easy to track without managing NAT IP addresses.

 

F5 Distributed Cloud MCN/Network as a Service

F5 XC offers an all-in-one software package for networking and security, managed through a single console with intent-based configuration. This eliminates the need for separate appliance setups. Changes to firewall, load balancer, and WAF functions, as well as feature updates, are rolled out via SDN without causing downtime.

The F5 XC Customer Edge (CE) works across physical appliances, VMs, containers, and clouds, creating overlay tunnels between CEs. Acting as an application gateway, it isolates routing domains and terminates L3 routing, while maintaining best practices for cloud, Kubernetes, and on-prem networks. This simplifies physical network configurations, speeding up provisioning and reducing costs.

 

How the CE works in an enterprise network

This design is straightforward: CEs are deployed in the cloud and the user’s data center. In this example, the data center CE connects four networks:

  • Underlay network for CE-to-CE connectivity
  • Cloud application network (VPC/VNET)
  • Branch office network
  • Data center application network

The data center CE connects the branch office and application networks to their respective VRFs, while the cloud CE links the cloud network to a VRF.

The application runs on VLAN 100 (192.168.1.1) in the data center, while clients are in the branch office. The load balancer configuration is as follows:

  • Load balancer: Branch office VRF
  • Domain: ent-app1.com
  • Endpoint (Origin pool): Onprem-VLAN100 VRF, 192.168.1.1

Clients access "ent-app1.com," which resolves to a VIP or CE interface address. The CE identifies the application’s location, and since the endpoint is in the same CE within the Onpre-VLAN100 VRF, it sets the source IP as the CE and forwards traffic to 192.168.1.1.

Next, the application is migrated to the cloud. The CE facilitates this transition from on-premises to the cloud by specifying the endpoint. The load balancer configuration is as follows:

  • Load Balancer: Branch Office VRF
  • Domain: ent-app1.com
  • Endpoint (Origin Pool): AWS-VPC VRF, 10.0.0.1

When traffic arrives at the CE from a client in the branch office, the CE in the data center forwards the traffic to the CE on AWS through an overlay tunnel. The only configuration change required is updating the endpoint via the console. There is no need to modify the firewall, switch, or router in the underlay network.
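Continuing the hypothetical sketch above, the migration amounts to replacing the single origin pool entry; nothing else in the declared intent changes.

    # The only change needed for the migration: point the origin pool at the cloud VRF.
    lb_ent_app1["origin_pool"] = [
        {"vrf": "AWS-VPC", "endpoint": "10.0.0.1"},
    ]
    # Clients keep resolving ent-app1.com in the branch-office VRF; the data center CE
    # now forwards their traffic over the overlay tunnel to the CE in AWS.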

Kubernetes networking is unique because it abstracts IP addresses. Applications use FQDNs instead of IP addresses and are isolated by namespace or label. When accessing the external network, Kubernetes uses SNAT to map the IP address to that of the Kubernetes node, masking the source of the traffic.

The CE can also run as a pod within a Kubernetes namespace. Combining a CE on Kubernetes with a CE on-premises offers flexible external access to Kubernetes applications without complex configurations such as Multus. F5 XC simplifies connectivity by providing a common configuration for linking Kubernetes to on-premises and cloud environments via a load balancer.

The load balancer configuration is as follows:

  • Load Balancer: Kubernetes
  • Domain: ent-app1.com
  • Endpoint (Origin Pool): Onprem-VLAN100 VRF, 192.168.1.1 / AWS-VPC VRF, 10.0.0.1
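Using the same hypothetical schema as the earlier sketches, the Kubernetes-side load balancer simply lists both possible origins, so one declared intent covers the on-premises and cloud endpoints.

    # Hypothetical sketch: one load balancer advertised inside the Kubernetes CE,
    # with origin pool members in both the on-premises and AWS VRFs.
    lb_k8s_ent_app1 = {
        "name": "ent-app1-k8s",
        "domain": "ent-app1.com",
        "advertise_vrf": "kubernetes",
        "origin_pool": [
            {"vrf": "Onprem-VLAN100", "endpoint": "192.168.1.1"},
            {"vrf": "AWS-VPC",        "endpoint": "10.0.0.1"},
        ],
    }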

 

* Related blog https://community.f5.com/kb/technicalarticles/multi-cluster-multi-cloud-networking-for-k8s-with-f5-distributed-cloud-%E2%80%93-archite/307125

 

Load balancer observability

The load balancer in F5 XC offers advanced observability features, including end-to-end network latency, L7 request logs, and API visualization. It automatically aggregates data from various logs and metrics.

You can also generate API graphs from the aggregated data. This allows users to identify which APIs are in use and create policies to block unnecessary ones.
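As a rough idea of how such aggregated data can drive policy, the sketch below counts which API paths actually appear in exported L7 request logs, so that unused ones become candidates for blocking. The log record format is assumed for illustration and is not the F5 XC export schema.

    # Sketch: tally API endpoints seen in exported L7 request logs.
    # The log record format here is assumed for illustration, not the F5 XC schema.
    import json
    from collections import Counter

    sample_logs = [
        '{"method": "GET",  "path": "/api/v1/orders"}',
        '{"method": "POST", "path": "/api/v1/orders"}',
        '{"method": "GET",  "path": "/api/v1/legacy/report"}',
    ]

    seen = Counter()
    for line in sample_logs:
        record = json.loads(line)
        seen[(record["method"], record["path"])] += 1

    known_apis = {("GET", "/api/v1/orders"), ("POST", "/api/v1/orders")}
    for api, count in seen.items():
        status = "in use" if api in known_apis else "review / candidate to block"
        print(f"{api[0]:4} {api[1]:25} {count} requests  -> {status}")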

 

Demo video

The video demonstration shows how F5 XC connects applications across different environments, including VMs, Kubernetes, on-premises, and cloud. The CE delivers both L3 and L7 connectivity across these environments.

 

Conclusion

F5 Distributed Cloud Services (F5 XC) simplifies enterprise networking by integrating applications seamlessly across VMs, Kubernetes, on-premises, and cloud environments. By minimizing physical network modifications and leveraging best practices from diverse infrastructure systems, F5 XC enables efficient, scalable, and secure application connectivity.

Key Benefits:

  • Simplified physical network design
  • Network as a Service for seamless application communication and visibility
  • Integration with new networking paradigms like Kubernetes and SASE

Looking Ahead

As new networking systems emerge, F5 XC remains adaptable, leveraging APIs for self-service network configurations and enabling future-ready enterprise networks.

Updated Oct 04, 2024
Version 2.0