Distributed Cloud

F5 Distributed Cloud Customer Edge on F5 rSeries – Reference Architecture
Traditionally, to advertise an application to the internet or to connect applications across multi-cloud environments, enterprises must configure and manage multiple networking and security devices from different vendors in the DMZ of the data center. CE on F5 rSeries is a single-vendor, converged solution for all enterprise multi-cloud application connectivity and security needs.

How I Did it - Migrating Applications to Nutanix NC2 with F5 Distributed Cloud Secure Multicloud Networking
In this edition of "How I Did it", we will explore how F5 Distributed Cloud Services (XC) enables seamless application extension and migration from an on-premises environment to Nutanix NC2 clusters.

Experience the power of F5 NGINX One with feature demos
Introduction

Introducing F5 NGINX One, a comprehensive solution designed to significantly enhance business operations through improved reliability and performance. At the core of NGINX One is our data plane, which is built on our world-class, lightweight, and high-performance NGINX software. This foundation provides robust traffic management solutions that are essential for modern digital businesses. These solutions include API Gateway, Content Caching, Load Balancing, and Policy Enforcement.

NGINX One includes a user-friendly, SaaS-based NGINX One Console that provides essential telemetry and oversees operations without requiring custom development or infrastructure changes. This visibility empowers teams to promptly address customer experience, security vulnerabilities, network performance, and compliance concerns. NGINX One's deployment across various environments empowers businesses to enhance their operations with improved reliability and performance. It is a versatile tool for strengthening operational efficiency, security posture, and overall digital experience.

NGINX One has several promising features on the horizon. Let's highlight three key features: Monitor Certificates and CVEs, Edit and Update Configurations, and Config Sync Groups. Let's delve into these in detail.

Monitor Certificates and CVEs

One of NGINX One's standout features is its ability to monitor Common Vulnerabilities and Exposures (CVEs) and certificate status. This functionality is crucial for maintaining application security integrity in a continually evolving threat landscape. The CVE and certificate monitoring capability of NGINX One enables teams to:

Prioritize Remediation Efforts: With an accurate and up-to-date database of CVEs and a comprehensive certificate monitoring system, NGINX One assists teams in prioritizing vulnerabilities and certificate issues according to their severity, guaranteeing that essential security concerns are addressed without delay.

Maintain Compliance: Continuous monitoring for CVEs and certificates ensures that applications comply with security standards and regulations, crucial for industries subject to stringent compliance mandates.

Edit and Update Configurations

This feature empowers users to efficiently edit configurations and perform updates directly within the NGINX One Console interface. With configuration editing, you can:

Make Configuration Changes: Quickly adapt to changing application demands by modifying configurations, ensuring optimal performance and security.

Simplify Management: Eliminate the need to SSH directly into each instance to edit or update configurations.

Reduce Errors: The intuitive interface minimizes potential errors in configuration changes, enhancing reliability by offering helpful recommendations.

Enhance Automation with the NGINX One SaaS Console: Integrates seamlessly into CI/CD and GitOps workflows, including GitHub, through a comprehensive set of APIs.

Config Sync Groups

The Config Sync Groups feature is invaluable for environments running multiple NGINX instances. This feature ensures consistent configurations across all instances, enhancing application reliability and reducing administrative overhead. The Config Sync Groups capability offers:

Automated Synchronization: Configurations are seamlessly synchronized across NGINX instances, guaranteeing that all applications operate with the most current and secure settings.
When a Config Sync Group already has a defined configuration, it is automatically pushed to instances as they join.

Scalability Support: Organizations can easily incorporate new NGINX instances without compromising configuration integrity as their infrastructure expands.

Minimized Configuration Drift: This feature is crucial for maintaining consistency across environments and preventing potential application errors or vulnerabilities from configuration discrepancies.

Conclusion

The NGINX One Cloud Console redefines digital monitoring and management by combining all the NGINX core capabilities and use cases. This all-encompassing platform is equipped with sophisticated features to simplify user interaction, drastically cut operational overhead and expenses, bolster security protocols, and broaden operational adaptability. Read our announcement blog for more details on the launch. To explore the platform's capabilities and see it in action, we invite you to tune in to our webinar on September 25th. This is a great opportunity to witness firsthand how NGINX One can revolutionize your digital monitoring and management strategies.

Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization
Introduction

Red Hat OpenShift Virtualization is a feature that brings virtual machine (VM) workloads into the Kubernetes platform, allowing them to run alongside containerized applications in a seamless, unified environment. Built on the open-source KubeVirt project, OpenShift Virtualization enables organizations to manage VMs using the same tools and workflows they use for containers.

Why OpenShift Virtualization?

Organizations today face critical needs such as:

Rapid Migration: "I want to migrate ASAP" from traditional virtualization platforms to more modern solutions.

Infrastructure Modernization: Transitioning legacy VM environments to leverage the benefits of hybrid and cloud-native architectures.

Unified Management: Running VMs alongside containerized applications to simplify operations and enhance resource utilization.

OpenShift Virtualization addresses these challenges by consolidating legacy and cloud-native workloads onto a single platform. This consolidation simplifies management, enhances operational efficiency, and facilitates infrastructure modernization without disrupting existing services.

Integrating F5 Distributed Cloud Customer Edge (XC CE) into OpenShift Virtualization further enhances this environment by providing advanced networking and security capabilities. This combination offers several benefits:

Multi-Tenancy: Deploy multiple CE VMs, each dedicated to a specific tenant, enabling isolation and customization for different teams or departments within a secure, multi-tenant environment.

Load Balancing: Efficiently manage and distribute application traffic to optimize performance and resource utilization.

Enhanced Security: Implement advanced threat protection at the edge to strengthen your security posture against emerging threats.

Microservices Management: Seamlessly integrate and manage microservices, enhancing agility and scalability.

This guide provides a step-by-step approach to deploying XC CE within OpenShift Virtualization, detailing the technical considerations and configurations required.

Technical Overview

Deploying XC CE within OpenShift Virtualization involves several key technical steps:

Preparation:
Cluster Setup: Ensure an operational OpenShift cluster with OpenShift Virtualization installed.
Access Rights: Confirm administrative permissions to configure compute and network settings.
F5 XC Account: Obtain access to generate node tokens and download the XC CE images.

Resource Optimization:
Enable CPU Manager: Configure the CPU Manager to allocate CPU resources effectively.
Configure Topology Manager: Set the policy to single-numa-node for optimal NUMA performance.

Network Configuration:
Open vSwitch (OVS) Bridges: Set up OVS bridges on worker nodes to handle networking for the virtual machines.
NetworkAttachmentDefinitions (NADs): Use Multus CNI to define how virtual machines attach to multiple networks, supporting both external and internal connectivity.

Image Preparation:
Obtain XC CE Image: Download the XC CE image in qcow2 format suitable for KubeVirt.
Generate Node Token: Create a one-time node token from the F5 Distributed Cloud Console for node registration.
User Data Configuration: Prepare cloud-init user data with the node token and network settings to automate the VM initialization process.

Deployment:
Create DataVolumes: Import the XC CE image into the cluster using the Containerized Data Importer (CDI), as sketched below.
Deploy VirtualMachine Resources: Apply manifests to deploy XC CE instances in OpenShift.
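For the CDI import step, a minimal DataVolume sketch is shown below. The image URL, DataVolume name, and storage size are assumptions for illustration; substitute values for your environment and for the qcow2 image you downloaded from the F5 Distributed Cloud Console.

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: xc-ce-disk                # hypothetical name, referenced later by the VM
  namespace: f5-ce
spec:
  source:
    http:
      url: "http://images.example.internal/xc-ce.qcow2"   # assumed location of the downloaded qcow2 image
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 80Gi             # assumed size; follow the disk sizing guidance for your CE deployment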
Network Configuration

Setting up the network involves creating Open vSwitch (OVS) bridges and defining NetworkAttachmentDefinitions (NADs) to enable multiple network interfaces for the virtual machines.

Open vSwitch (OVS) Bridges

Create a NodeNetworkConfigurationPolicy to define OVS bridges on all worker nodes:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-vms
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
      - name: ovs-vms
        type: ovs-bridge
        state: up
        bridge:
          allow-extra-patch-ports: true
          options:
            stp: true
          port:
            - name: eno1
    ovn:
      bridge-mappings:
        - localnet: ce2-slo
          bridge: ovs-vms
          state: present

Replace eno1 with the appropriate physical network interface on your nodes. This policy sets up an OVS bridge named ovs-vms connected to the physical interface.

NetworkAttachmentDefinitions (NADs)

Define NADs using Multus CNI to attach networks to the virtual machines.

External Network (ce2-slo): Connects VMs to the physical network with a specific VLAN ID. This setup allows the VMs to communicate with external systems, services, or networks, which is essential for applications that require access to resources outside the cluster or need to expose services to external users.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-slo
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-slo",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "f5-ce/ce2-slo",
      "mtu": 1500,
      "vlanID": 3052,
      "ipam": {}
    }

Internal Network (ce2-sli): Provides an isolated Layer 2 network for internal communication. By setting the topology to "layer2", this network operates as an internal overlay network that is not directly connected to the physical network infrastructure. The mtu is set to 1400 bytes to accommodate any overhead introduced by encapsulation protocols used in the internal network overlay.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-sli
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-sli",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "f5-ce/ce2-sli",
      "mtu": 1400,
      "ipam": {}
    }

VirtualMachine Configuration

Configuring the virtual machine involves preparing the image, creating cloud-init user data, and defining the VirtualMachine resource.

Image Preparation

Obtain XC CE Image: Download the qcow2 image from the F5 Distributed Cloud Console.
Generate Node Token: Acquire a one-time node token for node registration.

Cloud-Init User Data

Create a user-data configuration containing the node token and network settings:

#cloud-config
write_files:
  - path: /etc/vpm/user_data
    content: |
      token: <your-node-token>
      slo_ip: <IP>/<prefix>
      slo_gateway: <Gateway IP>
      slo_dns: <DNS IP>
    owner: root
    permissions: '0644'

Replace placeholders with actual network configurations. This file automates the VM's initial setup and registration.

VirtualMachine Resource Definition

Define the VirtualMachine resource, specifying CPU, memory, disks, network interfaces, and cloud-init configurations; a condensed sketch follows this list.

Resources: Allocate sufficient CPU and memory.
Disks: Reference the DataVolume containing the XC CE image.
Interfaces: Attach NADs for network connectivity.
Cloud-Init: Embed the user data for automatic configuration.
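The sketch below ties the pieces together, assuming the ce2-slo and ce2-sli NADs above and a DataVolume named xc-ce-disk (a hypothetical name from the earlier import sketch). CPU, memory, and VM name are illustrative; size them to the XC CE requirements for your deployment.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ce2                           # hypothetical VM name
  namespace: f5-ce
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 8                    # illustrative; dedicated placement pairs with single-numa-node
          dedicatedCpuPlacement: true
        memory:
          guest: 32Gi                 # illustrative sizing
        devices:
          autoattachPodInterface: false   # only the Multus-attached networks are used
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: slo               # external interface via the ce2-slo NAD
              bridge: {}
            - name: sli               # internal interface via the ce2-sli NAD
              bridge: {}
      networks:
        - name: slo
          multus:
            networkName: f5-ce/ce2-slo
        - name: sli
          multus:
            networkName: f5-ce/ce2-sli
      volumes:
        - name: rootdisk
          dataVolume:
            name: xc-ce-disk          # the DataVolume imported via CDI
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              write_files:
                - path: /etc/vpm/user_data
                  content: |
                    token: <your-node-token>   # plus the slo_* settings shown above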
Conclusion

Deploying F5 Distributed Cloud CE in OpenShift Virtualization enables organizations to leverage advanced networking and security features within their existing Kubernetes infrastructure. This integration facilitates a more secure, efficient, and scalable environment for modern applications. For detailed deployment instructions and configuration examples, please refer to the attached PDF guide.

Related Articles:
BIG-IP VE in Red Hat OpenShift Virtualization
VMware to Red Hat OpenShift Virtualization Migration
OpenShift Virtualization

VIPTest: Rapid Application Testing for F5 Environments
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.

How to use F5 Distributed Cloud for API Discovery
In today's digital landscape, APIs are crucial for integrating diverse applications and services by enabling seamless communication and data sharing between systems. API discovery involves finding, exploring, and assessing APIs for their suitability in applications, considering their varied sources and functionalities. F5 Distributed Cloud provides multiple architectures that make it ridiculously easy to discover APIs.

F5 Distributed Cloud Customer Edge Migration: CentOS to RHEL
In this article, I will introduce a process to migrate a Customer Edge site from the end-of-life CentOS operating system to the RHEL operating system.

Introduction:

In December 2023, the F5 Distributed Cloud Customer Edge image was rebased on Red Hat Enterprise Linux (RHEL). Prior to that, the Customer Edge ran on the CentOS 7.x operating system, which has been announced end of life. In this article, I will provide a migration strategy from CentOS to RHEL for Customer Edge sites that use a SaaS-Hybrid Edge deployment pattern (#2 in the slide below), where the VIP is on the Regional Edge and the tunnel termination and SNAT are on the Customer Edge. While we are using this deployment pattern as an example, the concepts for other patterns are the same, with a few caveats that I will include at the end of this article.

High-Level Concepts:

Before we discuss the migration phases, I want to introduce a few concepts that we will be utilizing. The first is what we call a Virtual Site. A Virtual Site provides the ability to apply a given configuration to a set (or group) of sites. The second is an Origin Pool. An origin pool is a mechanism to configure a set of endpoints, grouped together into a resource pool, that is used in the load balancer configuration. A typical CE site deployment consists of an HA cluster that discovers endpoints via an origin pool tied to the CE site. This discovery is typically via private DNS or RFC 1918 IP ranges, although other methods are available. When we introduce the Virtual Site construct, we perform this discovery via the Virtual Site rather than the original CE site. As depicted on the right-hand side of the drawing below, the origin pool is now discovered from all six nodes in the Virtual Site and will route traffic to the endpoint per the LB algorithm. The Virtual Site construct can also be utilized for more advanced HA design scenarios, and even for additional bandwidth between RE and CE, but those topics are discussed in other articles.

Virtual Site Setup:

Prerequisites:
Current CentOS Customer Edge site.
New RHEL Customer Edge site.

We first set up the Virtual Site construct by logging into our Distributed Cloud tenant. Once logged in:

Navigate to "Shared Configuration".
Under "Manage", choose "Virtual Site".
Provide a Name, Description, Site Type (in this case CE), and a Site Selector Expression.

Once the Virtual Site label is created, we navigate to the existing CentOS CE cluster and add the Site Selector Expression that we created in the previous step to the site's Labels section:

Go to the Multi-Cloud Network Connect tile.
Go to "Manage" > "Site Management" and choose the Site, Cloud Deployment site, or Secure Mesh Site; this will depend on how and where the site was deployed.
Once you have the correct site, click the three ellipses at the right, go to Manage Configuration, and Edit.
Add the Virtual Site label: type in the key from the Site Selector Expression (my example is "netta-az-vsite") and click Assign a Custom Key 'netta-az-vsite'; then type in the value (my example is "true") and click Assign a Custom Value 'true'.
Proceed with adding this same label to all sites that will be in the Virtual Site. A configuration-as-code sketch of the resulting Virtual Site object follows below.
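For readers who prefer configuration as code, here is a minimal sketch of the Virtual Site object, roughly as it could be expressed through the F5 XC API or vesctl. The field names follow the common shape of the XC virtual site spec, but verify the exact schema against your tenant's API reference; the label key and value mirror the console example above.

metadata:
  name: netta-az-vsite
  namespace: shared               # virtual sites are defined in the shared namespace
spec:
  site_type: CUSTOMER_EDGE        # selects CE sites, matching the console choice above
  site_selector:
    expressions:
      - "netta-az-vsite in (true)"   # the site selector expression from the setup steps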
Virtual Site Origin Pool Configuration:

Now that we have our Virtual Site configured, we need to configure the origin pool and discover it from the Virtual Site.

Go to Multi-Cloud Application Connect.
In the origin pool configuration, choose the discovery method: IP or DNS of the origin on given sites.
Under Site or Virtual Site, choose Virtual Site and pick your Virtual Site from the drop-down, i.e., the Virtual Site configured in the previous step. The rest of the configuration stays the same.
Validate that the origin is successfully discovered from the newly created Virtual Site: go to HTTP LB Performance, click on Origin Servers, and you should see two origins, one from each site (CentOS and RHEL) in the Virtual Site.

Migration:

Now that we have the Virtual Site and the Virtual Site origin pool discovery method built, we can start the migration.

Go to the HTTP LB and add the additional Virtual Site origin pool under the Origins section.
Leverage weights and priorities on the two origin pools to shift traffic from the CentOS site to the Virtual Site origin pool. A typical starting point is both origin pools at Priority 1 with weights that sum to 100: the CentOS origin pool starts at a weight of 95 and the Virtual Site origin pool at 5; decrement and increment the weights as you migrate. A sketch of this weighted configuration follows this list.
Once 100% of traffic is on the Virtual Site origin pool, remove the Virtual Site label from the CentOS site.
Remove the original CentOS site origin pool from the HTTP LB.
Delete the CentOS cluster.
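Below is a sketch of the weighted two-pool arrangement as it might look in the HTTP load balancer's route pools. The pool names and namespace are hypothetical, and the field names are indicative of the common XC API shape; verify against your tenant's API reference.

# HTTP LB origin pools during migration (sketch); both pools share Priority 1,
# and traffic shifts as the weights are adjusted
default_route_pools:
  - pool:
      name: centos-site-pool      # hypothetical name for the original CentOS-site pool
      namespace: my-namespace     # hypothetical namespace
    priority: 1
    weight: 95                    # start high; decrement as migration proceeds
  - pool:
      name: virtual-site-pool     # hypothetical name for the Virtual Site pool
      namespace: my-namespace
    priority: 1
    weight: 5                     # start low; increment as migration proceeds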
Additional Info:

In the example above, the Customer Edge (CE) deployment leveraged the REs to publish VIPs to the internet, and the CEs were used as tunnel termination points as well as for SNAT to origin members. If you move the VIP to the CE, there are a few caveats in how you advertise that VIP to the network. For example, to leverage all nodes within the cluster, you will need a VIP advertisement policy that consists of an out-of-band DNS LB option or a nested LB option. Also, as mentioned earlier in this article, there can be HA and bandwidth advantages to leveraging Virtual Sites, as depicted in the last slide below. For more info on the migration process or CE design options, reach out to your F5 sales specialist.

Seamless Application Migration to OpenShift Virtualization with F5 Distributed Cloud

As organizations endeavor to modernize their infrastructure, migrating applications to advanced virtualization platforms like Red Hat OpenShift Virtualization becomes a strategic imperative. However, they often encounter challenges such as minimizing downtime, maintaining seamless connectivity, ensuring consistent security, and reducing operational complexity. Addressing these challenges is crucial for a successful migration. This article explores how F5 Distributed Cloud (F5 XC), in collaboration with Red Hat's Migration Toolkit for Virtualization (MTV), provides a robust solution to facilitate a smooth, secure, and efficient migration to OpenShift Virtualization.

The Joint Solution: F5 XC CE and Red Hat MTV

Building upon our previous work on deploying F5 Distributed Cloud Customer Edge (XC CE) in Red Hat OpenShift Virtualization, we delve into the next phase of our joint solution with Red Hat. By leveraging F5 XC CE in both VMware and OpenShift environments, alongside Red Hat's MTV, organizations can achieve a seamless migration of virtual machines (VMs) from VMware NSX to OpenShift Virtualization. This integration not only streamlines the migration process but also ensures continuous application performance and security throughout the transition.

Key Components:

Red Hat Migration Toolkit for Virtualization (MTV): Facilitates the migration of VMs from VMware NSX to OpenShift Virtualization; an add-on to the OpenShift Container Platform.
F5 Distributed Cloud Customer Edge (XC CE) in VMware: Manages and secures application traffic within the existing VMware NSX environment.
F5 XC CE in OpenShift: Ensures consistent load balancing and security in the new OpenShift Virtualization environment.

Demonstration Architecture

To illustrate the effectiveness of this joint solution, let's delve into the architecture employed in our demonstration. The architecture leverages F5 XC CE in both environments to provide a unified and secure load balancing mechanism. Red Hat MTV acts as the migration engine, seamlessly transferring VMs while F5 XC CE manages traffic distribution to ensure zero downtime and maintain application availability and security.

Benefits of the Joint Solution

1. Seamless Migration:
Minimal Downtime: The phased migration approach ensures that applications remain available to users throughout the process.
IP Preservation: Maintaining the same IP addresses reduces the complexity of network reconfiguration and minimizes potential disruptions.

2. Enhanced Security:
Consistent Policies: Security measures such as Web Application Firewalls (WAF), bot detection, and DoS protection are maintained across both environments.
Centralized Management: F5 XC CE provides a unified interface for managing security policies, ensuring robust protection during and after migration.

3. Operational Efficiency:
Unified Platform: Consolidating legacy and cloud-native workloads onto OpenShift Virtualization simplifies management and enhances operational workflows.
Scalability: Leveraging Kubernetes and OpenShift's orchestration capabilities allows for greater scalability and flexibility in application deployment.

4. Improved User Experience:
Continuous Availability: Users experience uninterrupted access to applications, unaware of the backend migration activities.
Performance Optimization: Intelligent load balancing ensures optimal application performance by efficiently distributing traffic across environments.
Watch the Demo Video

To see this joint solution in action, watch our detailed demo video on the F5 DevCentral YouTube channel. The video walks you through the migration process, showcasing how F5 XC CE and Red Hat MTV work together to facilitate a smooth and secure transition from VMware NSX to OpenShift Virtualization.

Conclusion

Migrating virtual machines (VMs) from VMware NSX to OpenShift Virtualization is a significant step towards modernizing your infrastructure. With the combined capabilities of F5 Distributed Cloud Customer Edge and Red Hat's Migration Toolkit for Virtualization, organizations can achieve this migration with confidence, ensuring minimal disruption, enhanced security, and improved operational efficiency.

Related Articles:
Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization
BIG-IP VE in Red Hat OpenShift Virtualization
VMware to Red Hat OpenShift Virtualization Migration
OpenShift Virtualization

Seamless App Connectivity with F5 and Nutanix Cloud Services
Modern enterprise networks face the challenge of connecting diverse cloud services and partner ecosystems, expanding routing domains and distributed application connections. As network complexity grows, the need to onboard applications quickly becomes paramount despite intricate networking operations. Automation tools like Ansible help manage network and security devices, but it is still hard to connect distributed applications across newer systems like clouds and Kubernetes. This article explores F5 Distributed Cloud Services (XC), which facilitates intent-based application-to-application connectivity across varied infrastructures.

Expanding and increasingly sophisticated enterprise networks

Traditional data center networks typically rely on VLAN-isolated architectures with firewall ACLs for intra-network security. With digital transformation and cloud migrations, new systems must work alongside existing networks, moving from VLANs to labels and namespaces in platforms like Kubernetes and Red Hat OpenShift. Future trends point to increased edge computing adoption, further complicating network infrastructures. Network infrastructure is becoming more complex due to factors such as increased NAT settings, routing adjustments, added ACLs, and the need to remove duplicate IP addresses when integrating new and existing networks. This growing complexity drives up the costs of design, implementation, and testing, amplifying the impact of every network change.

A typical network uses firewalls to control access between VLANs or isolated networks based on security policies. Access lists (ACLs) are organized by source and destination applications or clients. The order of ACLs is important because shorter prefixes can override longer ones. In some cases, ACLs exceed 10,000 lines, making it difficult to identify which rules impact application connectivity. As a result, ACLs for discontinued applications often remain in the firewall configuration, posing potential security risks. However, refactoring ACLs requires significant effort and testing.

F5 XC addresses the ACL issue by managing all application connectivity across on-premises data centers and clouds. Its load balancer connects applications through VLANs, VPCs, and other networks. Each load balancer has its own application control and associated ACLs, ensuring changes to one load balancer don't affect others. When an application is discontinued, its load balancer is deleted, preventing leftover configurations and reducing security risks.

SNAT/DNAT is commonly used to resolve IP conflicts, but it reduces observability since each NAT table between the client and application must be referenced. ACL implementations vary across router/firewall vendors: some apply filters before NAT, others after. Logs may follow this behavior, making IP address normalization difficult. Pre-NAT IP addresses must be managed across the entire network, adding complexity and increasing costs.

F5 XC simplifies distributed load balancing by terminating client sessions and forwarding them to endpoints. Clients and endpoints only need IP reachability to the CE interface. The load balancer can use a VIP for client access, but global management isn't required, as the client-side network sets it. It provides clear source/destination data, making application access easy to track without managing NAT IP addresses.
F5 Distributed Cloud MCN/Network as a Service

F5 XC offers an all-in-one software package for networking and security, managed through a single console with intent-based configuration. This eliminates the need for separate appliance setups. Furthermore, firewall, load balancer, and WAF changes, as well as function updates, are done via SDN without causing downtime. The F5 XC Customer Edge (CE) works across physical appliances, VMs, containers, and clouds, creating overlay tunnels between CEs. Acting as an application gateway, it isolates routing domains and terminates L3 routing, while maintaining best practices for cloud, Kubernetes, and on-prem networks. This simplifies physical network configurations, speeding up provisioning and reducing costs.

How CE works in an enterprise network

This design is straightforward: CEs are deployed in the cloud and in the user's data center. In this example, the data center CE connects four networks:

Underlay network for CE-to-CE connectivity
Cloud application network (VPC/VNET)
Branch office network
Data center application network

The data center CE connects the branch office and application networks to their respective VRFs, while the cloud CE links the cloud network to a VRF. The application runs on VLAN 100 (192.168.1.1) in the data center, while clients are in the branch office. The load balancer configuration is as follows:

Load balancer: Branch office VRF
Domain: ent-app1.com
Endpoint (origin pool): Onprem-VLAN100 VRF, 192.168.1.1

Clients access "ent-app1.com", which resolves to a VIP or CE interface address. The CE identifies the application's location, and since the endpoint is in the same CE within the Onprem-VLAN100 VRF, it sets the source IP as the CE and forwards traffic to 192.168.1.1.

Next, the application is migrated to the cloud. The CE facilitates this transition from on-premises to the cloud simply by changing the endpoint. The load balancer configuration is as follows:

Load balancer: Branch office VRF
Domain: ent-app1.com
Endpoint (origin pool): AWS-VPC VRF, 10.0.0.1

When traffic arrives at the CE from a client in the branch office, the CE in the data center forwards the traffic to the CE on AWS through an overlay tunnel. The only configuration change required is updating the endpoint via the console (see the origin pool sketch at the end of this section); there is no need to modify the firewall, switch, or router in the underlay network.

Kubernetes networking is unique because it abstracts IP addresses. Applications use FQDNs instead of IP addresses and are isolated by namespace or label. When accessing the external network, Kubernetes uses SNAT to map the IP address to that of the Kubernetes node, masking the source of the traffic. A CE can run as a pod within a Kubernetes namespace. Combining a CE on Kubernetes with a CE on-premises offers flexible external access to Kubernetes applications without complex configurations like Multus. F5 XC simplifies connectivity by providing a common configuration for linking Kubernetes to on-premises and cloud environments via a load balancer. The load balancer configuration is as follows:

Load balancer: Kubernetes
Domain: ent-app1.com
Endpoint (origin pool): Onprem-VLAN100 VRF, 192.168.1.1 / AWS-VPC VRF, 10.0.0.1

* Related blog: https://community.f5.com/kb/technicalarticles/multi-cluster-multi-cloud-networking-for-k8s-with-f5-distributed-cloud-%E2%80%93-archite/307125
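To make the endpoint change concrete, below is a hedged sketch of the two origin pool definitions expressed as configuration objects. The site names are hypothetical and the field names follow the common shape of XC origin pool specs (an origin server pinned to a site and reached via its inside network); check the exact schema against the API reference.

# Origin pool before migration: the app lives on VLAN 100 behind the data center CE
origin_servers:
  - private_ip:
      ip: 192.168.1.1
      site_locator:
        site:
          name: dc-ce               # hypothetical data center CE site name
      inside_network: {}            # reach the origin via the inside (Onprem-VLAN100) VRF

# Origin pool after migration: only the endpoint definition changes
origin_servers:
  - private_ip:
      ip: 10.0.0.1
      site_locator:
        site:
          name: aws-ce              # hypothetical AWS CE site name
      inside_network: {}            # reach the origin via the AWS-VPC VRF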
Load balancer observability

The load balancer in F5 XC offers advanced observability features, including end-to-end network latency, L7 request logs, and API visualization. It automatically aggregates data from various logs and metrics (see below). You can also generate API graphs from the aggregated data. This allows users to identify which APIs are in use and create policies to block unnecessary ones.

Demo video

The video demonstration shows how F5 XC connects applications across different environments, including VMs, Kubernetes, on-premises, and cloud. The CE delivers both L3 and L7 connectivity across these environments.

Conclusion

F5 Distributed Cloud Services (F5 XC) simplifies enterprise networking by integrating applications seamlessly across VMs, Kubernetes, on-premises, and cloud environments. By minimizing physical network modifications and leveraging best practices from diverse infrastructure systems, F5 XC enables efficient, scalable, and secure application connectivity.

Key Benefits:
- Simplified physical network design
- Network as a Service for seamless application communication and visibility
- Integration with new networking paradigms like Kubernetes and SASE

Looking Ahead

As new networking systems emerge, F5 XC remains adaptable, leveraging APIs for self-service network configurations and enabling future-ready enterprise networks.