Published on 25-Sep-2022 18:00
Kubernetes has a simplified networking model designed for general IT workloads that only use TCP/HTTP protocols: a single IP address per POD (the smallest deployable unit of computing in Kubernetes) and a single external gateway. Telco deployments, on the other hand, require:
Fortunately, Kubernetes has been designed to be extensible, yet it is up to software and infrastructure architects to design solutions with good practices by following Kubernetes patterns.
It is common for NF vendors to use additional interfaces (via the Multus CNI) for each NF's PODs. With this approach there is no dynamic advertising of addresses as the PODs in the deployment change, nor a good way for network elements outside the cluster, such as firewalls, to track these addresses.
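To illustrate this pattern, a secondary interface is typically attached to a POD through a Multus NetworkAttachmentDefinition plus a pod annotation. The sketch below uses hypothetical names, namespaces and addresses; note that the static address is not advertised anywhere as PODs come and go:

```yaml
# Hypothetical example: a macvlan secondary network attached via Multus.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: nf-signaling          # hypothetical network name
  namespace: nf-vendor-a      # hypothetical namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "static",
        "addresses": [ { "address": "10.10.20.5/24" } ]
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nf-pod
  namespace: nf-vendor-a
  annotations:
    # Multus attaches the extra interface; its address lives outside
    # the Kubernetes networking model and firewalls cannot track it.
    k8s.v1.cni.cncf.io/networks: nf-signaling
spec:
  containers:
  - name: nf
    image: registry.example.com/nf:latest   # hypothetical image
```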
Alternatively, NF vendors try to hide this complexity by selling turn-key solutions with dedicated Kubernetes clusters for each NF or vendor. The customer therefore ends up with multiple, typically dissimilar clusters, ultimately defeating the whole purpose of Kubernetes, which aims to homogenize the applications' environment in a single platform. In a way, this approach is equivalent to having a load balancer for each application. Moreover, address management by external network elements remains inadequate because NFs are identified only coarsely, by the clusters' addresses.
Both approaches break Kubernetes patterns by adding complexity in the form of non-homogeneous networking for the different NFs.
In this post we introduce the F5 BIG-IP Next Service Proxy Kubernetes (SPK) -- BIG-IP Next SPK for short -- architecture, which overcomes these limitations while being Network Function-agnostic. We will use Red Hat's OpenShift as the reference platform.
BIG-IP Next SPK is a cloud-native solution which runs inside the Kubernetes cluster and is composed of independent components that can be scaled out. It is headless software (no graphical UI) managed through the Kubernetes API. The major software components are shown next.
BIG-IP Next SPK's data plane makes use of the widely trusted BIG-IP Traffic Management Microkernel (TMM), which allows for a high-performance, dependable product from the start. A dynamic routing component configures BGP peering with the upstream routers for ECMP load distribution, and BFD is available for fast failure detection. Session persistence is provided by a distributed database that stores connection-related state such as pool member persistence, SNATs and NAT46 translations; this database is backed by a Kubernetes Persistent Volume, so the information remains available even after POD restarts. The controller is the component which interacts with the Kubernetes API that customers use to configure BIG-IP Next SPK. Fluentd, a high-performance industry standard, exposes BIG-IP Next SPK metrics and logs to external tools.
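On the upstream router side, the BGP/ECMP/BFD arrangement described above could look like the following FRR-style sketch. The AS numbers and neighbor addresses are hypothetical; the neighbors represent the TMM PODs advertising the same service addresses:

```
router bgp 64512
 ! Each TMM POD peers from its external address (hypothetical addresses)
 neighbor 10.1.1.11 remote-as 64513
 neighbor 10.1.1.12 remote-as 64513
 ! BFD provides fast detection of a failed TMM POD
 neighbor 10.1.1.11 bfd
 neighbor 10.1.1.12 bfd
 address-family ipv4 unicast
  ! ECMP: distribute traffic across all TMM PODs advertising the same routes
  maximum-paths 8
 exit-address-family
```

With this arrangement, scaling out TMM simply adds another equal-cost next hop, and a POD failure is withdrawn within the BFD detection interval.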
The overall network architecture is shown next. From this picture we would like to emphasise the following items:
A more detailed view of the network path is shown next. From this diagram we want to emphasise:
As depicted above, BIG-IP Next SPK has two types of interfaces: external facing the upstream routers and internal facing the Kubernetes networking.
OpenShift's networking makes the use of BIG-IP Next SPK in a cluster optional on a per-namespace basis, and this is transparent to the applications. No change or configuration needs to be made in the applications.
OpenShift uses the OVNKubernetes CNI for its networking. It can be seen in the picture that the applications continue using the OVNKubernetes router as their default and only gateway.
Finally, we show an L2 view of the networks in a cluster with BIG-IP Next SPK. From this diagram we want to emphasise:
No changes need to be made in the applications or in the namespace hosting them in order to use BIG-IP Next SPK. At BIG-IP Next SPK configuration time we simply specify which namespace we want it to handle and voilà: BIG-IP Next SPK becomes the next hop of the OVNKubernetes router for that namespace. No labels or other artifacts need to be configured manually.
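As a sketch of how this selection looks in practice, the namespace to handle is indicated when deploying the controller, for example through a Helm values excerpt like the one below. The key names are illustrative and may differ between SPK releases, so the official installation documentation should be consulted:

```yaml
# Illustrative Helm values excerpt; key names may vary by SPK release.
controller:
  watchNamespace: "nf-apps"   # hypothetical application namespace to handle
```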
BIG-IP Next SPK services are configured through the Kubernetes API using Custom Resource Definitions (CRDs). At the time of this writing the following resources are available:
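To give a flavor of these CRDs, an ingress for a TCP service might look like the sketch below. The apiVersion, field names and values are written from memory of the public SPK documentation and should be treated as approximate; check the reference documentation for your release:

```yaml
# Illustrative F5SPKIngressTCP resource; names and fields are approximate.
apiVersion: "ingresstcp.k8s.f5net.com/v1"
kind: F5SPKIngressTCP
metadata:
  name: nf-tcp-ingress        # hypothetical resource name
  namespace: spk-ingress      # hypothetical SPK namespace
service:
  name: nf-service            # the Kubernetes Service to expose
  port: 8080
spec:
  destinationAddress: "192.168.100.10"   # external virtual server address
  destinationPort: 8080
```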
Besides the above resource list, we would like to highlight three functionalities that should not be overlooked:
BIG-IP Next SPK fully supports IPv4/IPv6 dual-stack networking as implemented in Kubernetes v1.21 or later. BIG-IP Next SPK’s DNS46/NAT46 feature, however, does not rely on Kubernetes IPv4/IPv6 dual-stack and therefore, it can be used with earlier versions of Kubernetes.
The adoption of IPv6 in new 5G deployments has created a need to interact with older IPv4 single stack components and services. BIG-IP Next SPK’s DNS46/NAT46 provides this interoperability, easing the adoption and transition between IPv4 and IPv6 services. This solution allows IPv4 applications to access any IPv6 application on demand, without requiring reconfiguration.
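At the heart of a NAT46 translation is a mapping between the IPv4 and IPv6 address families. A minimal sketch of one well-known scheme, RFC 6052's /96 prefix embedding, using only Python's standard library (the prefix is the RFC 6052 well-known prefix; this is an illustration of the concept, not BIG-IP Next SPK's actual implementation):

```python
import ipaddress

def nat46_map(ipv4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
    """Embed an IPv4 address in an IPv6 /96 prefix (RFC 6052 style)."""
    net = ipaddress.ip_network(prefix)
    v4 = ipaddress.IPv4Address(ipv4)
    # The IPv4 address occupies the low 32 bits of the IPv6 address.
    return ipaddress.IPv6Address(int(net.network_address) | int(v4))

def nat46_unmap(ipv6: str) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address from the low 32 bits."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(ipv6)) & 0xFFFFFFFF)

print(nat46_map("192.0.2.33"))           # 64:ff9b::c000:221
print(nat46_unmap("64:ff9b::c000:221"))  # 192.0.2.33
```

A translating gateway applies this mapping to packet headers in both directions, which is why no reconfiguration of the IPv4 application is needed.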
In upcoming releases BIG-IP Next SPK will continue expanding its traffic management capabilities by exposing more TMM capabilities through the Kubernetes API, notably HTTP/2.
BIG-IP Next SPK will also gain more security-oriented features. At present the following features are being targeted:
These security features are especially useful because BIG-IP Next SPK constitutes a security boundary protecting all the workloads in the cluster and Kubernetes itself (CNI, API, basic node management). Although BIG-IP Next SPK runs inside the Kubernetes cluster, it is the only software that manages the external network interfaces at L3. This is depicted in the next figure.
These security features have been available for a long time in BIG-IP products, and at present we are capturing customers' input to design the best possible APIs for exposing these functionalities following Kubernetes patterns.
This article introduces a scalable and dependable high performance gateway solution that delivers the granular ingress and egress controls in Kubernetes-based deployments that Telcos need. It builds on the unique potential of OpenShift external gateways by making full use of OpenShift capabilities—an industry first. Use cases that particularly benefit include 5GC and MEC. Plus, the BIG-IP Next SPK solution can dynamically translate IPv4 to IPv6 network addresses, which solves the problem of mixed IPv4 and IPv6 deployments. The result is a gateway solution flexible enough to adapt to new and evolving Telco needs while offering interoperability with pre-5G services.
For additional information, please check the Red Hat & F5 co-written white paper F5 Telco Gateway for Red Hat OpenShift and the official BIG-IP Next SPK documentation.