From virtual to cloud-native: infrastructure evolution

 

Introduction 

The course of industry evolution shows how infrastructure optimization clears the path for further innovation. 

It started with physical devices being chained together to deliver specific services. Then we moved to virtual functions, and now we are exploring cloud-native functions. With each evolution, we optimize space, power, speed, and simplicity. This enables our applications to go beyond existing limitations and dedicate our infrastructure’s processing power to what really matters: enhancing the customer experience. 

The telecom industry picked up on this evolution first, since such optimization allows for further enhancements to the services provided to end users. Major enterprises, and more recently startups, have followed along after witnessing the significant impact on infrastructure spending. 

Looking at how each evolution step enables further optimization, the table below shows how the transition from physical to virtual, and later to cloud-native, delivers significant levels of optimization. 

 

| Transition | Compute (CPU) savings | Memory (RAM) savings | Power & space savings | Deployment time | O&M (Ops & Mgmt) effort |
| --- | --- | --- | --- | --- | --- |
| PNF ➝ VNF | ~30–40% | ~25–35% | ~50–70% | Reduced from weeks to days | Moderate savings via automation |
| VNF ➝ CNF | ~40–60% | ~30–50% | ~20–30% additional | Reduced from days to minutes | Significant: GitOps, CI/CD, self-healing |

 

Exploring the table above, we can highlight the following savings:

  1. CPU Efficiency:
    • VNFs eliminate hardware dependencies but often carry VM hypervisor overhead.
    • CNFs, running in lightweight containers, reduce virtualization overhead and allow finer-grained scaling (e.g., per microservice), improving CPU utilization by up to 60% compared to VNFs.
  2. Memory Footprint:
    • CNFs use shared memory models and service-mesh patterns more efficiently than monolithic VNFs, often cutting memory usage by 30–50%.
  3. Power & Rack Space:
    • PNFs consume the most power and space due to dedicated appliances.
    • VNFs allow consolidation; CNFs, by improving container density, can shrink power and space use further by an additional 20–30%.
  4. Deployment Time & Agility:
    • PNFs take weeks to provision.
    • VNFs reduce that to days, but CNFs can be deployed or updated in minutes, often automatically via CI/CD pipelines.
  5. Operational Efficiency:
    • CNFs support auto-scaling, automated healing, and GitOps, leading to a reduction of more than 70% in manual operations compared with PNF-era systems (a minimal GitOps sketch follows this list).
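
To make the GitOps point concrete, below is a minimal sketch of driving a CNF release declaratively from a Git repository with Argo CD, one common GitOps tool (not part of the F5 CNF stack). The repository URL, path, application name, and namespace are hypothetical placeholders.

```bash
# Minimal GitOps sketch using Argo CD (assumes Argo CD is already installed).
# Repository URL, path, application name, and namespace are hypothetical placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-cnf                 # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/network/cnf-manifests.git   # hypothetical repo
    targetRevision: main
    path: overlays/production                                    # hypothetical path
  destination:
    server: https://kubernetes.default.svc
    namespace: cnf                                               # hypothetical namespace
  syncPolicy:
    automated:
      prune: true      # remove resources that disappear from Git
      selfHeal: true   # revert manual drift automatically
EOF
```

With this in place, any change merged to the Git repository is rolled out automatically, which is where most of the manual-operations savings come from.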


BIG-IP Next Cloud Native Functions 

The CNF implementation leverages a disaggregation (DAG) layer that decouples control plane logic from data plane processing. This separation enables dynamic traffic steering across CNF pods and optimizes resource utilization through intelligent workload distribution. The architecture supports horizontal scaling patterns typical of cloud-native applications while maintaining the deterministic performance characteristics required for telecommunications workloads.
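
As an illustration of that horizontal scaling pattern, the sketch below autoscales a hypothetical CNF data-plane Deployment on CPU utilization. The Deployment name, namespace, and thresholds are assumptions made for the example, not values documented by F5.

```bash
# Horizontal scaling sketch for a hypothetical CNF data-plane Deployment.
# Names and thresholds are illustrative assumptions, not F5-documented values.
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cnf-dataplane-hpa
  namespace: cnf                      # hypothetical CNF namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cnf-dataplane               # hypothetical data-plane Deployment
  minReplicas: 2                      # keep a floor for deterministic performance
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70      # scale out when average CPU exceeds 70%
EOF
```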

Telecom service providers make use of CNF performance optimizations to: 

  • Enable efficient and secure processing of N6-LAN traffic at the edge to meet the stringent requirements of 5G networks.

  • Optimize AI-RAN deployments with dynamic scaling and enhanced security, ensuring that AI workloads are processed efficiently and securely at the edge, improving overall network performance.

  • Deploy advanced AI applications at the edge with the confidence of carrier-grade security and traffic management, ensuring real-time processing and analytics for a variety of edge use cases.

Integration with Kubernetes occurs through custom resource definitions (CRDs) that extend the native API, allowing network functions to be managed as first-class Kubernetes resources. F5-provided Helm charts facilitate deployment automation and lifecycle management, enabling infrastructure-as-code practices for network service provisioning.
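
For example, once the CRDs are registered they can be inspected and managed with standard Kubernetes tooling, and a release can be installed from the F5-provided charts. The commands below are a hedged sketch: the Helm repository name, chart name, and values file are placeholders, so substitute the names from the official F5 documentation.

```bash
# List the F5 CRDs registered in the cluster (the pattern match is illustrative).
kubectl get crds | grep -i f5

# Show the API resources those CRDs expose, so network functions can be
# handled like any other Kubernetes object (kubectl get/describe/apply).
kubectl api-resources | grep -i f5

# Install a CNF release from the F5-provided Helm chart.
# "f5-repo", "f5-cnf-chart", and values.yaml are hypothetical placeholders.
helm install my-cnf f5-repo/f5-cnf-chart \
  --namespace cnf \
  --values values.yaml
```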

  • BIG-IP Next Edge Firewall CNF

    • Firewall, DDoS protection, and Intrusion Prevention System (IPS) technology based on F5’s highly successful BIG-IP Advanced Firewall Manager (AFM).

  • BIG-IP Next DNS CNF

    • Enable DNS caching and reduce DNS latency by up to 80%. DNS over HTTPS (DoH) decrypts and resolves DNS queries over HTTPS without impacting requests per second (RPS).

  • BIG-IP Next CGNAT CNF

    • Ease IPv6 migration and improve network scalability and security with IPv4 address management.

  • BIG-IP Next Policy Enforcer CNF

    • Improve quality of experience (QoE) and average revenue per user (ARPU) with tools like traffic classification and subscriber awareness.

  • BIG-IP Next Disaggregation (DAG) CNF

    • Efficiently route traffic across CNF pods with dynamic steering, ensuring better resource utilization and high performance.

       

F5 BIG-IP Next for Kubernetes CNFs 

In this section, we explore the main steps to deploy CNFs on a Kubernetes cluster; illustrative command sketches for the pre-setup and supporting components follow the list. 

 

  • Kubernetes cluster 
  • CNF Pre-Setup 
    • Local Repository
    • NFS storage
    • CNF namespace 
    • Install F5 CRDs 
    • Cert Manager 
    • CRD Conversion
  • Containers to be used 
    • RabbitMQ
    • dSSM
    • Fluentd
    • CWC deployment
  • CNFs
    • F5Ingress, along with the enabled features. 
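
As a rough illustration of the pre-setup steps, the sketch below uses only publicly available components (a plain OCI registry image, the NFS subdir external provisioner, and cert-manager). The namespace name, NFS server address, export path, and CRD bundle file name are hypothetical placeholders; the actual CRD bundle ships with the F5 package.

```bash
# --- Local repository: a plain OCI registry for hosting CNF images (illustrative) ---
docker run -d --name local-registry -p 5000:5000 registry:2

# --- CNF namespace (the name "cnf" is a hypothetical placeholder) ---
kubectl create namespace cnf

# --- NFS storage: dynamic provisioning via the NFS subdir external provisioner ---
# nfs.server and nfs.path are placeholders for your NFS export.
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=10.0.0.10 \
  --set nfs.path=/exports/cnf

# --- Cert Manager ---
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# --- Install F5 CRDs: apply the CRD manifests shipped with the F5 package ---
# (the file name below is a placeholder for the bundle in the official distribution)
kubectl apply -f f5-cnf-crds.yaml
```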
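
The supporting containers can likewise be stood up from publicly available charts, while the F5-specific components (dSSM, CWC, and F5Ingress) come from the F5-provided charts. The release names and namespace below are assumptions, and the final command only shows the general shape with placeholder chart names.

```bash
# --- RabbitMQ (Bitnami chart; release name and namespace are illustrative) ---
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install rabbitmq bitnami/rabbitmq --namespace cnf

# --- Fluentd (upstream fluent Helm chart) ---
helm repo add fluent https://fluent.github.io/helm-charts
helm install fluentd fluent/fluentd --namespace cnf

# --- F5Ingress and the other F5-specific components (dSSM, CWC) ---
# The chart and values file names below are placeholders, not the published names;
# use the F5-provided Helm charts referenced in the walkthrough.
helm install f5ingress f5-repo/f5ingress \
  --namespace cnf \
  --values f5ingress-values.yaml

# Verify that everything is up
kubectl get pods -n cnf
```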

You can go through our walkthrough via this link: BIG-IP Next for Kubernetes CNF walkthrough

 

 
