BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads

Kubernetes has a simplified networking model that was designed for general IT workloads using TCP/HTTP protocols: a single IP address per POD (the smallest deployable unit of computing in Kubernetes) and a single external gateway. Telco deployments, on the other hand, require:

  • 3GPP protocols support.
  • Transitional 4G to 5G facilities.
  • Network capabilities that match Telco networks: multiple external network connectivity, different paths depending on the Network Function (NF), dynamic routing, etc.


Fortunately, Kubernetes has been designed to be extensible, yet it is up to the software and infrastructure architects to design solutions with good practices by following Kubernetes patterns.

It is usual for NF vendors to use additional interfaces (via the Multus CNI) for each NF POD. With this approach there is no dynamic advertising of addresses as the PODs in the deployment change, nor a good way for network elements outside the cluster, such as firewalls, to track these addresses.

Alternatively, NF vendors try to hide this complexity by selling turn-key solutions with dedicated Kubernetes clusters for each NF or vendor. The customer therefore ends up with multiple, typically dissimilar clusters, ultimately defeating the whole purpose of Kubernetes, which aims to homogenize the application environment in a single platform. In a way, this approach is equivalent to having a load balancer for each application. Moreover, the management of addresses by external network elements remains inadequate, because NFs can only be identified coarsely through the clusters' addresses.

These two approaches break Kubernetes patterns by adding complexity in the form of non-homogeneous networking for the different NFs.

In this post we introduce the F5 BIG-IP Next Service Proxy Kubernetes (SPK) -- BIG-IP Next SPK for short -- architecture, which overcomes these limitations while being Network Function-agnostic. We will use Red Hat's OpenShift as the reference platform.

BIG-IP Next SPK software architecture

BIG-IP Next SPK is a cloud-native solution that runs inside the Kubernetes cluster and is made up of independent components that can be scaled out. It is headless software (no graphical UI) and is managed using the Kubernetes API. The major software components are shown next.

BIG-IP Next SPK's data plane makes use of the widely trusted BIG-IP Traffic Management Microkernel (TMM), which allows for a high-performance, dependable product from the start. A dynamic routing component configures BGP peering with the upstream routers for ECMP load distribution, and the BFD feature is available for fast failure detection. Session persistence is provided by a distributed database that stores connection-related state such as pool member persistence, SNATs, NAT46 translations, etc. This database is backed by a Kubernetes Persistent Volume, which keeps the information available even after POD restarts. The controller is the component that interacts with the Kubernetes API, which customers use to configure BIG-IP Next SPK. Fluentd, a high-performance industry standard, exposes BIG-IP Next SPK metrics and logs to external tools.
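As a minimal sketch of how the dynamic routing component is typically set up at installation time, the Helm values below follow the shape of the BGP examples in the SPK documentation; the exact key names (tmm.dynamicRouting and its children) are assumptions that should be verified against the release you deploy:

    # Sketch (assumed key names): BGP peering towards the upstream routers,
    # with BFD-based fall-over for fast failure detection.
    tmm:
      dynamicRouting:
        enabled: true
        tmmRouting:
          config:
            bgp:
              asn: 64512                # local AS number
              neighbors:
                - ip: "10.10.10.200"    # upstream router to peer with
                  asn: 64512
                  acceptsIPv4: true
                  fallOver: true        # use BFD to detect peer failures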

BIG-IP Next SPK network architecture

The overall network architecture is shown next. From this picture we would like to emphasise the following items:

  • Independent BIG-IP Next SPK instances, each with a completely different external network configuration, can handle ingress and egress traffic for each namespace individually.
  • BIG-IP Next SPK is highly scalable at POD level (1-24 cores) and at cluster level, limited only by the upstream ECMP capabilities.

A more detailed view of the network path is shown next. From this diagram we want to emphasise:

  • PODs make use of BIG-IP Next SPK transparently, continuing to use the CNI as usual.
  • BIG-IP Next SPK is a single-tier ingress/egress solution that does not require an external load balancer.
  • BIG-IP Next SPK has direct POD IP visibility; there is no kube-proxy or other IP-translating mechanism in between.

As depicted above, BIG-IP Next SPK has two types of interfaces: external interfaces facing the upstream routers and internal interfaces facing the Kubernetes networking.

OpenShift's networking makes using BIG-IP Next SPK in a cluster optional on a per-namespace basis, and this is done transparently to the applications. No change or configuration needs to be done in the applications.

OpenShift uses the OVN-Kubernetes CNI for its networking. It can be seen in the picture that the applications continue using the OVN-Kubernetes router as their default and only gateway.

Finally, we show an L2 view of the networks in a cluster with BIG-IP Next SPK. From this diagram we want to emphasise:

  • How regular nodes hosting applications need no modifications either.
  • How BIG-IP Next SPK is typically set up with link aggregation and SR-IOV wire-speed interfaces.
  • How the L3 path between BIG-IP Next SPK and the application's nodes is validated using BFD.

Using BIG-IP Next SPK

To use BIG-IP Next SPK, no changes need to be made in the applications or in the namespace hosting them. At BIG-IP Next SPK configuration time we indicate which namespace we want BIG-IP Next SPK to handle and voilà: BIG-IP Next SPK becomes the next hop of the OVN-Kubernetes router for that namespace. No labels or other artifacts need to be configured manually.
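As a minimal sketch, this namespace selection is expressed in the controller's Helm values at installation time. The watchNamespace key below matches the pattern used in the SPK documentation, but its exact name and location should be checked against your release:

    # values.yaml (sketch): tell the SPK controller which application
    # namespace it should watch and handle traffic for.
    controller:
      watchNamespace: "my-telco-nf"   # assumed key name; verify per release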

Defining BIG-IP Next SPK service configurations is done through the Kubernetes API using Custom Resource Definitions (CRDs). At the time of this writing the following resources are available:

  • F5SPKIngressTCP
    Manages ingress layer 4 TCP application traffic.
  • F5SPKIngressUDP
    Manages ingress layer 4 UDP application traffic.
  • F5SPKIngressDiameter
    Manages Diameter traffic unifying ingress and egress traffic using either TCP or SCTP and keeps sessions persistent using the SESSION-ID attribute value pair (AVP) by default.
  • F5SPKIngressNGAP
    Balances ingress datagram loads for SCTP or NG application protocol (NGAP) signaling.
  • F5SPKEgress
    Enables egress traffic for pods using SNAT or DNS/NAT46. DNS cache and rate limiting parameters can be configured.
  • F5SPKSnatpool
    Allocates IP addresses for egress pod connections.
  • F5SPKDNSCache
    Provides high-performance, transparent DNS resolution and caching for the F5SPKEgress resources.
  • F5SPKPortList and F5SPKAddressList
    Create sets of ports and addresses, respectively, to make creating and updating services easier.
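For illustration, a minimal F5SPKIngressTCP manifest looks roughly like the following. The shape mirrors the examples in the SPK documentation, but the apiVersion and field names are assumptions to verify against your release:

    # Sketch: expose the Kubernetes Service "nginx-web-app" externally on
    # TCP port 80 through BIG-IP Next SPK.
    apiVersion: "ingresstcp.k8s.f5net.com/v1"
    kind: F5SPKIngressTCP
    metadata:
      name: nginx-web-app
      namespace: web-apps               # application namespace watched by SPK
    service:
      name: nginx-web-app               # Service whose endpoints receive traffic
      port: 80
    spec:
      destinationAddress: "192.168.10.50"   # external virtual address
      destinationPort: 80
      snat: "SRC_TRANS_AUTOMAP"             # source-translate towards the PODs

Applying the manifest with kubectl is all that is needed; BIG-IP Next SPK picks it up through the Kubernetes API and begins serving the virtual address.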

Besides the above resource list, we would like to highlight three functionalities that should not be overlooked:

  • IPv6 support

    BIG-IP Next SPK fully supports IPv4/IPv6 dual-stack networking as implemented in Kubernetes v1.21 or later. BIG-IP Next SPK's DNS46/NAT46 feature, however, does not rely on Kubernetes IPv4/IPv6 dual-stack and can therefore be used with earlier versions of Kubernetes.

  • DNS46/NAT46 translation

    The adoption of IPv6 in new 5G deployments has created a need to interact with older IPv4 single-stack components and services. BIG-IP Next SPK's DNS46/NAT46 provides this interoperability, easing the adoption of and transition between IPv4 and IPv6 services. This solution allows IPv4 applications to access any IPv6 application on demand, without requiring reconfiguration (see the first configuration sketch after this list).

  • Application hairpinning

    The application hairpinning feature is used to differentiate between internal and external clients. A selected set of internal clients accesses a BIG-IP Next SPK service with the same domain name or IP address as that of another BIG-IP Next SPK service used by external clients, each with a different configuration. The key difference between the two types of connections is that internal clients are connected using SNAT and external clients are not. This is done by installing two BIG-IP Next SPK CRs of the same type, for example F5SPKIngressTCP, with each CR enabled on a selected VLAN or VLAN list (see the second sketch after this list).
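Sketching the egress side, an F5SPKSnatpool allocates the source addresses and an F5SPKEgress ties them to the POD traffic. The manifests below follow the documentation's pattern; the apiVersion and the field names, particularly the DNS46/NAT46 toggle, are assumptions to verify against your release:

    # Sketch: SNAT pool providing per-TMM source addresses for egress.
    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKSnatpool
    metadata:
      name: egress-snatpool
      namespace: spk-ingress
    spec:
      name: "egress_snatpool"
      addressList:
        - - 10.200.1.10     # addresses used by the first TMM
          - 10.200.1.11
    ---
    # Sketch: enable egress for the watched namespace using the pool above;
    # dnsNat46Enabled is an assumed field name for the DNS46/NAT46 feature.
    apiVersion: "k8s.f5net.com/v1"
    kind: F5SPKEgress
    metadata:
      name: egress
      namespace: spk-ingress
    spec:
      egressSnatpool: "egress_snatpool"
      dnsNat46Enabled: true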
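For application hairpinning, a rough sketch under the same caveats: two F5SPKIngressTCP CRs share the external address, one bound to the external VLAN without SNAT and one bound to the internal VLAN with it (the vlans block is an assumed field name):

    # Sketch: external clients, no SNAT.
    apiVersion: "ingresstcp.k8s.f5net.com/v1"
    kind: F5SPKIngressTCP
    metadata:
      name: my-app-external
      namespace: apps
    service:
      name: my-app
      port: 443
    spec:
      destinationAddress: "192.168.10.60"
      destinationPort: 443
      vlans:
        vlanList:
          - vlan-external
    ---
    # Sketch: internal clients reach the same address, connected via SNAT.
    apiVersion: "ingresstcp.k8s.f5net.com/v1"
    kind: F5SPKIngressTCP
    metadata:
      name: my-app-internal
      namespace: apps
    service:
      name: my-app
      port: 443
    spec:
      destinationAddress: "192.168.10.60"
      destinationPort: 443
      snat: "SRC_TRANS_AUTOMAP"
      vlans:
        vlanList:
          - vlan-internal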

BIG-IP Next SPK's roadmap

In the upcoming releases BIG-IP Next SPK will continue expanding its traffic management capabilities by exposing more TMM capabilities through the Kubernetes API, notably HTTP/2.

Also, BIG-IP Next SPK will be gaining more security-oriented features. At present the following features are being targeted:

  • Firewall
  • DDoS protection
  • WAF

These security features are especially useful because BIG-IP Next SPK constitutes a security boundary with respect to all the workloads in the cluster and Kubernetes itself (CNI, API, basic node management). Although BIG-IP Next SPK runs inside the Kubernetes cluster, it is the only software that manages the external network interfaces at L3. This is depicted in the next figure.


These security features have been available for a long time in BIG-IP products, and at present we are capturing customers' input to design the best possible APIs for exposing these functionalities following Kubernetes patterns.

Conclusion

This article introduces a scalable and dependable high performance gateway solution that delivers the granular ingress and egress controls in Kubernetes-based deployments that Telcos need. It builds on the unique potential of OpenShift external gateways by making full use of OpenShift capabilities—an industry first. Use cases that particularly benefit include 5GC and MEC. Plus, the BIG-IP Next SPK solution can dynamically translate IPv4 to IPv6 network addresses, which solves the problem of mixed IPv4 and IPv6 deployments. The result is a gateway solution flexible enough to adapt to new and evolving Telco needs while offering interoperability with pre-5G services.

For additional information please check the Red Hat & F5 co-written white paper F5 Telco Gateway for Red Hat OpenShift and the official BIG-IP Next SPK documentation.


Published Sep 20, 2022
Version 1.0
