Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization

Introduction

Red Hat OpenShift Virtualization is a feature that brings virtual machine (VM) workloads into the Kubernetes platform, allowing them to run alongside containerized applications in a seamless, unified environment. Built on the open-source KubeVirt project, OpenShift Virtualization enables organizations to manage VMs using the same tools and workflows they use for containers.

Why OpenShift Virtualization?

Organizations today face critical needs such as:

  • Rapid Migration: Moving workloads off traditional virtualization platforms to more modern solutions as quickly as possible.
  • Infrastructure Modernization: Transitioning legacy VM environments to leverage the benefits of hybrid and cloud-native architectures.
  • Unified Management: Running VMs alongside containerized applications to simplify operations and enhance resource utilization.

OpenShift Virtualization addresses these challenges by consolidating legacy and cloud-native workloads onto a single platform. This consolidation simplifies management, enhances operational efficiency, and facilitates infrastructure modernization without disrupting existing services.

Integrating F5 Distributed Cloud Customer Edge (XC CE) into OpenShift Virtualization further enhances this environment by providing advanced networking and security capabilities. This combination offers several benefits:

  • Multi-Tenancy: Deploy multiple CE VMs, each dedicated to a specific tenant, enabling isolation and customization for different teams or departments within a secure, multi-tenant environment.
  • Load Balancing: Efficiently manage and distribute application traffic to optimize performance and resource utilization.
  • Enhanced Security: Implement advanced threat protection at the edge to strengthen your security posture against emerging threats.
  • Microservices Management: Seamlessly integrate and manage microservices, enhancing agility and scalability.

This guide provides a step-by-step approach to deploying XC CE within OpenShift Virtualization, detailing the technical considerations and configurations required.

Technical Overview

Deploying XC CE within OpenShift Virtualization involves several key technical steps:

  1. Preparation:
    • Cluster Setup: Ensure an operational OpenShift cluster with OpenShift Virtualization installed.
    • Access Rights: Confirm administrative permissions to configure compute and network settings.
    • F5 XC Account: Obtain access to generate node tokens and download the XC CE images.
  2. Resource Optimization:
    • Enable CPU Manager: Set the CPU Manager policy to static so that CE VMs can be granted dedicated CPUs.
    • Configure Topology Manager: Set the policy to single-numa-node for optimal NUMA alignment (see the example KubeletConfig after this list).
  3. Network Configuration:
    • Open vSwitch (OVS) Bridges: Set up OVS bridges on worker nodes to handle networking for the virtual machines.
    • NetworkAttachmentDefinitions (NADs): Use Multus CNI to define how virtual machines attach to multiple networks, supporting both external and internal connectivity.
  4. Image Preparation:
    • Obtain XC CE Image: Download the XC CE image in qcow2 format suitable for KubeVirt.
    • Generate Node Token: Create a one-time node token from the F5 Distributed Cloud Console for node registration.
    • User Data Configuration: Prepare cloud-init user data with the node token and network settings to automate the VM initialization process.
  5. Deployment:
    • Create DataVolumes: Import the XC CE image into the cluster using the Containerized Data Importer (CDI).
    • Deploy VirtualMachine Resources: Apply manifests to deploy XC CE instances in OpenShift.
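
For step 2, the CPU Manager and Topology Manager are enabled through a KubeletConfig applied to a MachineConfigPool. The manifest below is a minimal sketch of that standard OpenShift pattern; the custom-kubelet: cpumanager-enabled label is an assumption and must match a label you add to your worker MachineConfigPool.

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled  # assumed label; add it to your worker MachineConfigPool first
  kubeletConfig:
    cpuManagerPolicy: static
    cpuManagerReconcilePeriod: 5s
    topologyManagerPolicy: single-numa-node

Applying a KubeletConfig rolls out through the Machine Config Operator, which drains and reboots the affected nodes, so schedule it accordingly.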

Network Configuration

Setting up the network involves creating Open vSwitch (OVS) bridges and defining NetworkAttachmentDefinitions (NADs) to enable multiple network interfaces for the virtual machines.

Open vSwitch (OVS) Bridges

Create a NodeNetworkConfigurationPolicy to define OVS bridges on all worker nodes:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-vms
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
    - name: ovs-vms
      type: ovs-bridge
      state: up
      bridge:
        allow-extra-patch-ports: true
        options:
          stp: true
        port:
          - name: eno1
    ovn:
      bridge-mappings:
      - localnet: ce2-slo
        bridge: ovs-vms
        state: present
  • Replace eno1 with the appropriate physical network interface on your nodes.
  • This policy sets up an OVS bridge named ovs-vms connected to the physical interface.
  • The bridge-mappings entry ties the localnet network name ce2-slo, referenced by the NAD below, to the ovs-vms bridge.

NetworkAttachmentDefinitions (NADs)

Define NADs using Multus CNI to attach networks to the virtual machines.

External Network (ce2-slo):

Connects VMs to the physical network with a specific VLAN ID. This setup allows the VMs to communicate with external systems, services, or networks, which is essential for applications that require access to resources outside the cluster or need to expose services to external users.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-slo
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-slo",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "f5-ce/ce2-slo",
      "mtu": 1500,
      "vlanID": 3052,
      "ipam": {}
    }

Internal Network (ce2-sli):

Provides an isolated Layer 2 network for internal communication. By setting the topology to "layer2", this network operates as an internal overlay that is not directly connected to the physical network infrastructure. The mtu is set to 1400 bytes to accommodate the overhead introduced by the encapsulation protocols used in the overlay.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-sli
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-sli",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "f5-ce/ce2-sli",
      "mtu": 1400,
      "ipam": {}
    }


VirtualMachine Configuration

Configuring the virtual machine involves preparing the image, creating cloud-init user data, and defining the VirtualMachine resource.

Image Preparation

  • Obtain XC CE Image: Download the qcow2 image from the F5 Distributed Cloud Console.
  • Generate Node Token: Acquire a one-time node token for node registration.
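
Once downloaded, the image is typically imported into the cluster as a DataVolume so that the Containerized Data Importer (CDI) can populate a PersistentVolumeClaim with it. The manifest below is a minimal sketch; the name, image URL, and storage size are illustrative assumptions to adapt to your environment.

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ce2-image
  namespace: f5-ce
spec:
  source:
    http:
      # assumed location; point this at wherever you host the downloaded qcow2
      url: "http://images.example.internal/f5-xc-ce.qcow2"
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 80Gi

If the image is only available locally, virtctl image-upload can push it into a DataVolume instead of pulling it over HTTP.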

Cloud-Init User Data

Create a user-data configuration containing the node token and network settings:

#cloud-config
write_files:
  - path: /etc/vpm/user_data
    content: |
      token: <your-node-token>
      slo_ip: <IP>/<prefix>
      slo_gateway: <Gateway IP>
      slo_dns: <DNS IP>
    owner: root
    permissions: '0644'
  • Replace placeholders with actual network configurations.
  • This file automates the VM's initial setup and registration.

VirtualMachine Resource Definition

Define the VirtualMachine resource, specifying CPU, memory, disks, network interfaces, and cloud-init configuration; a sketch follows the list below.

  • Resources: Allocate sufficient CPU and memory.
  • Disks: Reference the DataVolume containing the XC CE image.
  • Interfaces: Attach NADs for network connectivity.
  • Cloud-Init: Embed the user data for automatic configuration.
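
The following VirtualMachine manifest is a minimal sketch that ties these pieces together: it boots from the DataVolume imported earlier, attaches the ce2-slo and ce2-sli NADs, and injects the cloud-init user data. The name, CPU, and memory values are illustrative assumptions; size them according to F5's published CE requirements.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ce2
  namespace: f5-ce
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 8                        # illustrative sizing; follow F5 CE guidance
          dedicatedCpuPlacement: true     # pairs with the static CPU Manager policy
        memory:
          guest: 16Gi                     # illustrative sizing
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: slo                   # external interface via the ce2-slo NAD
              bridge: {}
            - name: sli                   # internal interface via the ce2-sli NAD
              bridge: {}
      networks:
        - name: slo
          multus:
            networkName: f5-ce/ce2-slo
        - name: sli
          multus:
            networkName: f5-ce/ce2-sli
      volumes:
        - name: rootdisk
          dataVolume:
            name: ce2-image               # the DataVolume created during image preparation
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              write_files:
                - path: /etc/vpm/user_data
                  content: |
                    token: <your-node-token>
                    slo_ip: <IP>/<prefix>
                    slo_gateway: <Gateway IP>
                    slo_dns: <DNS IP>
                  owner: root
                  permissions: '0644'

Once applied, the VM boots, cloud-init writes /etc/vpm/user_data, and the CE uses the node token to register with the F5 Distributed Cloud Console.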

Conclusion

Deploying F5 Distributed Cloud CE in OpenShift Virtualization enables organizations to leverage advanced networking and security features within their existing Kubernetes infrastructure. This integration facilitates a more secure, efficient, and scalable environment for modern applications.

For detailed deployment instructions and configuration examples, please refer to the attached PDF guide.

