F5 BIG-IP deployment with OpenShift - platform and networking options
Introduction
This article is an architectural overview of how F5 BIG-IP can be used with Red Hat OpenShift. Several topics are covered, including:
- 1-tier or 2-tier arrangements, where the BIG-IP load balances workload PODs directly or load balances ingress controllers (such as NGINX+ or OpenShift's built-in router), respectively.
- Multi-cluster arrangements, where the BIG-IP can load balance or do route sharding across two or more clusters.
- Multi-tenancy and IP address management options.
While this article has a NetOps/infrastructure focus, the follow-up article BIG-IP deployment with OpenShift—application publishing focuses on DevOps/applications.
Overall architecture
When using BIG-IP with Red Hat OpenShift, the Container Ingress Services (CIS from now on) container is used to connect the BIG-IP APIs with the Kubernetes APIs. OpenShift is the source of truth: when a user configuration is applied or when a change occurs in the OpenShift cluster, CIS automatically updates the configuration in the BIG-IP.
Under the hood, CIS updates the BIG-IP configuration using the AS3 declarative API. It is not necessary to be familiar with AS3, as all the configuration can be applied using Kubernetes resource types.
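As an illustration, below is a minimal sketch of how an application could be published through a CIS VirtualServer custom resource (the names, host, address, and port are hypothetical placeholders); CIS translates such resources into an AS3 declaration and pushes it to the BIG-IP.

```yaml
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: myapp-vs                       # hypothetical resource name
  namespace: myapp
  labels:
    f5cr: "true"                       # label used in many CIS examples to select custom resources
spec:
  host: myapp.example.com              # L7 route published on the BIG-IP
  virtualServerAddress: "192.0.2.10"   # VIP configured on the BIG-IP (placeholder address)
  pools:
    - path: /
      service: myapp-svc               # Kubernetes Service backing the application
      servicePort: 8080
```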
IP Address Management (IPAM from now on) is important when it is desired that the DevOps teams operate independently from the infrastructure administrators. CIS supports IPAM by making use of the F5 IPAM Controller (FIC from now on), which is deployed as a container as well.
The next picture shows how these components fit together: CIS and FIC are PODs deployed in the OpenShift cluster, and AS3 is deployed in the BIG-IP.
In the next sections, we cover the different deployment options and the considerations to take into account. The full documentation can be found in F5 clouddocs. The F5 BIG-IP container integrations are Open Source Software (OSS) and can be found in this GitHub repository, where you will find additional technical details and examples.
Networking - CNI options
Kubernetes networking is provided by Container Network Interface plugins (CNI from now on), and F5 BIG-IP supports all of OpenShift's native CNIs:
- OVNKubernetes - This is the preferred option. GA since OpenShift 4.6, it makes use of Geneve encapsulation, but the BIG-IP interacts with this CNI in a routed mode in which the packets to/from the BIG-IP don't use encapsulation. Additionally, the PODs' cluster IPs are discovered dynamically by CIS when OpenShift nodes are added or removed, which also makes this the easiest method from a BIG-IP management point of view. Check CIS configuration for OVNKubernetes for details, and see the deployment sketch after this list.
- OpenShiftSDN - Supported since OpenShift 3.x, it is being phased out in favour of OVNKubernetes. It makes use of VXLAN encapsulation between the nodes and between the nodes and the BIG-IPs. This requires manual configuration of VXLAN tunnels in the BIG-IPs when OpenShift nodes are added or removed. Check CIS configuration for OpenShiftSDN for details.
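The CNI mode is mainly selected through CIS start-up arguments. The fragment below is a sketch of the CNI-related flags as documented for recent CIS 2.x releases (the BIG-IP address and partition are placeholders; verify the exact flag names against the CIS version you deploy):

```yaml
# Fragment of the CIS (k8s-bigip-ctlr) container spec showing CNI-related arguments only
containers:
  - name: k8s-bigip-ctlr
    image: f5networks/k8s-bigip-ctlr:latest
    args:
      - --bigip-url=10.0.0.10               # placeholder BIG-IP management address
      - --bigip-partition=openshift         # partition owned by this CIS instance
      - --pool-member-type=cluster          # address POD (cluster) IPs directly
      # OVNKubernetes: no tunnels towards the BIG-IP; CIS keeps static routes updated instead
      - --orchestration-cni=ovn-k8s         # assumed flag value, check your CIS release notes
      - --static-routing-mode=true
      # OpenShiftSDN alternative (instead of the two flags above):
      # - --openshift-sdn-name=/Common/openshift_vxlan   # VXLAN tunnel pre-created on the BIG-IP
```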
Feature-wise, these CNIs can be compared using the following table from the OpenShift documentation.
Besides the above features, performance should also be taken into consideration. The NICs used in the OpenShift cluster should do encapsulation off-loading to reduce the CPU load in the nodes. Increasing the MTU is recommended, especially for encapsulating CNIs; this is suggested in OpenShift's documentation as well, and it needs to be set at installation time in the install-config.yaml file. See this OpenShift.com link for details.
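One common way of doing this is to add a Cluster Network Operator manifest after running openshift-install create manifests; the sketch below assumes the operator.openshift.io/v1 Network resource layout and uses a placeholder MTU value, so verify both against your OpenShift version and underlying network:

```yaml
# manifests/cluster-network-03-config.yml (hypothetical file added before "openshift-install create cluster")
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 8900   # placeholder; must leave room for the Geneve overhead within the physical MTU
```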
Networking - the importance of supporting clusters' CNI
There are basically two ways to reach a Kubernetes workload from outside the cluster: through the kube-proxy indirection (NodePort or LoadBalancer Service types) or directly through the CNI (ClusterIP Service type):
- Using the NodePort Service type. In this case, external hosts access the PODs using any of the cluster's node IPs. When a request reaches a node, Kubernetes' kube-proxy is responsible for forwarding the request to a POD on the local or a remote node; forwarding to a remote node adds noticeable overhead. In 2-tier deployments, externalTrafficPolicy: Local could be used with appropriate monitoring to avoid this additional hop (see the Service sketch after this list).
  NodePort is popular with other external Load Balancers because it is an easy method to access the PODs without having to support the CNI; as the name indicates, it uses the Kubernetes nodes' IP addresses. This has the drawback of an additional indirection. This drawback is especially relevant for 1-tier deployments because the application PODs cannot be accessed directly, eliminating the advantages of this deployment type. On the other hand, BIG-IP supports OpenShift's CNIs, both OpenShiftSDN and OVNKubernetes.
- Using the LoadBalancer Service type. The packet path in this mode is equivalent to NodePort, in which the external load balancer needs an intermediate kube-proxy hop before reaching the POD. An alternative to bypass kube-proxy is the use of hostNetwork access, but this is generally discouraged because of its security implications.
- Using the ClusterIP Service type. This is the preferred mode because requests are sent directly to the destination POD. This requires supporting OpenShift's CNIs, which is the case with BIG-IP. It is worth noting that BIG-IP also supports other CNIs such as Calico or Cilium. This arrangement can be seen next.
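To make the difference concrete, below is a minimal sketch of the same hypothetical application exposed with a NodePort Service (kube-proxy path) and with a plain ClusterIP Service (direct CNI path); all names, labels, and ports are placeholders:

```yaml
# Exposed through kube-proxy; externalTrafficPolicy: Local keeps traffic on the receiving node
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
---
# Reached directly through the CNI by a load balancer that supports it (such as BIG-IP/CIS)
apiVersion: v1
kind: Service
metadata:
  name: myapp-clusterip
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
```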
Please note in the above figure the traffic path from the BIG-IP, where the arrow reaches the inside of the CNI area. This indicates that the BIG-IP can address the ingress controllers or the workload PODs' IPs within the cluster network.
Using the ClusterIP Service type is also more flexible because it allows CIS to use 1-tier and 2-tier arrangements simultaneously.
Networking - Load Balancer arrangement options
There are basically two arrangement options, 1-tier and 2-tier. In a nutshell:
- A 2-tier arrangement is the typical way in which Kubernetes clusters are deployed. In this arrangement, the BIG-IP has only the role of External Load Balancer (first tier only) and sends the client requests to the Ingress Controller Instances (second tier). The Ingress Controllers ultimately forward the requests to the workload PODs.
- In a 1-tier arrangement, the BIG-IP sends the requests to the workload PODs directly. This is a much simpler arrangement, in which the BIG-IP performs the role of both External Load Balancer and Ingress Controller.
Next, we will see the advantages of each arrangement. Please note that when using ClusterIP, this selection can be made on a per-Service basis; from the BIG-IP's point of view, it is irrelevant what the endpoints are.
Load Balancer arrangement option - 2-tier arrangement
Unlike most External Load Balancers, the BIG-IP can expose services with either Layer 4 functionalities or Layer 7 functionalities. In Layer 7 mode, SSL/TLS off-loading, HSM, Advanced WAF, and other advanced services can be used.
A 2-tier arrangement provides greater scalability than a 1-tier arrangement in terms of the number of L7 routes exposed or the number of Kubernetes PODs, because the control plane workload (the related Kubernetes events that are generated for these PODs and Routes) is split between BIG-IP/CIS and the in-cluster Ingress Controller.
This arrangement also has strong isolation between the two tiers, which is ideal when each tier is managed by a different team (e.g., platform and developer teams).
A BIG-IP 2-tier arrangement is shown next:
Load Balancer arrangement option - 1-tier arrangement
In this arrangement, the BIG-IP typically operates in L7 mode and sends the traffic directly to the final workload POD. This is done by sending traffic to Services in ClusterIP mode. In this arrangement, persistence is handled easily, and the workload PODs can be directly monitored by the BIG-IP, providing an accurate view of the application's health.
A BIG-IP 1-tier arrangement is shown next:
This arrangement is simpler to troubleshoot and has lower latency and potentially higher per-session performance. Isolation between platform and developer teams can be achieved with CIS and FIC, yet this isolation is not as strong as in 2-tier arrangements. This is described in BIG-IP deployment with OpenShift — application publishing options.
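As a sketch of the direct POD monitoring mentioned above, a health monitor can be attached to the pool of the hypothetical VirtualServer resource shown earlier (field names follow the CIS custom resource schema; verify them against your CIS version):

```yaml
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: myapp-vs
  namespace: myapp
spec:
  host: myapp.example.com
  virtualServerAddress: "192.0.2.10"   # placeholder VIP
  pools:
    - path: /
      service: myapp-svc               # ClusterIP Service; the BIG-IP reaches the PODs directly
      servicePort: 8080
      monitor:
        type: http                     # the BIG-IP probes each POD individually
        interval: 5
        timeout: 16
```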
BIG-IP platform flexibility: deployment, scalability, and multi-tenancy options
With BIG-IP, the deployment options are independent of whether the BIG-IP is an appliance, a scale-out chassis, or a Virtual Edition. The configuration is always the same down to the L2 (VLAN/tunnel) configuration level; only the L1 (physical interface) configuration changes.
This platform flexibility also opens up the possibility of using different options for scalability, multi-tenancy, hardware accelerators, or Hardware Security Modules (HSMs). The latter are especially important for keeping the SSL/TLS private keys in a FIPS-compliant manner. The HSMs can be onboard, on-premises Network HSMs, or cloud SaaS HSMs.
Multi-tenancy Options
In this section, multi-tenancy refers to the case in which different projects from one or more OpenShift clusters are serviced by a single BIG-IP. The different CIS deployment options are outlined next:
- A CIS instance can manage all namespaces of a given OpenShift cluster or a subset of them. Namespaces can be specified with a list or a label selector (e.g., environment=test or environment=production).
- Multiple CIS instances, handling different namespaces, can share a single BIG-IP or use different BIG-IPs. Each CIS instance will own a dedicated partition in a BIG-IP. For example, it is feasible to set up an OpenShift cluster with development-, pre-production-, and production-labeled namespaces and have these serviced by different CIS instances in the same or different BIG-IPs for each environment (see the sketch after this list).
- Multiple CIS instances in a single BIG-IP can also handle different OpenShift clusters. This is thanks to the soft isolation provided by BIG-IP partitions. Network isolation between these partitions can be achieved with route domains.
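For instance, a minimal sketch of a CIS instance dedicated to the test environment could look like the following (Deployment trimmed to the relevant arguments; names, addresses, and the partition are placeholders, and the BIG-IP credential flags are omitted):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cis-test                        # hypothetical: one CIS instance for the "test" environment
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels: {app: cis-test}
  template:
    metadata:
      labels: {app: cis-test}
    spec:
      serviceAccountName: bigip-ctlr    # assumed pre-created ServiceAccount with the CIS RBAC
      containers:
        - name: k8s-bigip-ctlr
          image: f5networks/k8s-bigip-ctlr:latest
          args:
            # BIG-IP credential flags / Secret mounting omitted for brevity
            - --bigip-url=10.0.0.10                # placeholder BIG-IP management address
            - --bigip-partition=ocp-test           # dedicated partition owned by this instance
            - --namespace-label=environment=test   # only watch namespaces carrying this label
            - --pool-member-type=cluster           # address POD (cluster) IPs directly
```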
Some of these deployment options are shown next:
IP address management (IPAM)
CIS has the capability of dynamically allocating IP addresses using the F5 IPAM Controller (FIC) companion. At the time of writing, it is possible to retrieve IP addresses from the following providers:
- Infoblox
- F5 local DB provider, which makes use of a PVC for persistence.
Which provider is used is transparent to the DevOps team; it is only required to specify an ipamLabel attribute in the exposed L7 or L4 service. The DevOps team can also indicate that it wants to share an IP address between different L7 or L4 services by means of the hostGroup attribute. This is described in the follow-up article.
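As a hedged sketch (attribute and flag names as documented for CIS and FIC; the ranges, labels, and names are placeholders), the DevOps side only references an ipamLabel, while the infrastructure side maps that label to an address range in the FIC instance and starts CIS with --ipam=true:

```yaml
# DevOps side: request an address from the "production" range instead of a fixed VIP
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: myapp-vs
  namespace: myapp
spec:
  host: myapp.example.com
  ipamLabel: production                # FIC allocates the VIP; no virtualServerAddress needed
  pools:
    - path: /
      service: myapp-svc
      servicePort: 8080
---
# Infrastructure side: fragment of the FIC container arguments for the F5 local DB provider
# (flag names assumed from the FIC documentation; verify against the release you deploy)
args:
  - --orchestration=kubernetes
  - --ipam-provider=f5-ip-provider     # local DB provider persisted on a PVC
  - --ip-range='{"production":"10.100.10.1-10.100.10.30","test":"10.100.20.1-10.100.20.30"}'
```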
BIG-IP data plane scalability options
A single BIG-IP cluster can scale up horizontally with up to 8 BIG-IP instances and have the different projects distributed across them. This is referred to as Scale-N in the BIG-IP documentation. This mode is often not used because it requires additional orchestration or manual operation for optimal load distribution. In this mode, projects have soft isolation between them by means of BIG-IP partitions.
When ultimate scalability or hard isolation is required, TMOS vCMP technology or, in newer versions, F5OS tenant facilities can be used in larger appliances and scale-out chassis. These multi-tenant facilities allow running independent BIG-IP instances, isolated at the hardware level, even allowing the use of different BIG-IP versions. The tenant BIG-IP instances can be allocated different amounts of hardware resources. In the next picture, the different tenants are shown as colored bars spanning several blades (grey bars).
Using chassis-based platforms allows scaling data plane performance and increasing redundancy by adding blades to the system without the need for any reconfiguration on the CIS/OpenShift side of things.
BIG-IP control plane scalability options
Very large OpenShift clusters with either a large number of services exposed or a large number of PODs, combined with a high rate of changes, will trigger many events in the Kubernetes API. These events are processed by CIS and ultimately by the BIG-IP's control plane. In these cases, the following strategies can be used to improve the BIG-IP's control plane scalability:
- Disaggregate the different projects into different BIG-IPs. These might be multiple BIG-IP VEs or instances in F5 vCMP or F5OS tenants when using hardware platforms.
- Use a 2-tier architecture, which reduces the number of Kubernetes objects and events that the BIG-IP is exposed to.
- In the upcoming months, CIS will be available in BIG-IP Next. This is a re-architecture of BIG-IP and incorporates major scalability improvements in the control plane.
Multi-cluster OpenShift
Since CIS version 2.14, it is also possible for the BIG-IP to load balance between 2 or more clusters in Active-Active, Active-Standby, or Ratio modes. Both 1-tier and 2-tier arrangements are possible.
The next figure shows a single BIG-IP exposing workloads from 2 OpenShift clusters.
Please note that the OpenShift clusters don't need to be running the same version, so this arrangement is also interesting for performing OpenShift upgrades.
When using CIS in multi-cluster mode, an additional CIS instance in a second cluster is needed for redundancy. If there are more than 2 OpenShift clusters, no further CIS instances are needed. Therefore, a typical BIG-IP cluster of 2 units load balancing 2 or more OpenShift clusters will always require 4 CIS instances.
For each BIG-IP, one of the CIS instances has the (P)rimary role and is in charge of making changes in the BIG-IP by default, while the (S)econdary CIS is on standby. Both CIS instances access all OpenShift clusters. A more comprehensive view of this can be seen in the next diagram, which considers having more than 2 OpenShift clusters. OpenShift clusters that don't host a CIS instance are referred to as remotely managed.
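A rough sketch of how this is wired is shown below; it is based on the CIS multi-cluster documentation, but the flag values and ConfigMap field names are assumptions that should be verified against the CIS 2.14+ release notes:

```yaml
# Fragment of the primary CIS instance arguments (the peer CIS on the other BIG-IP uses "secondary")
args:
  - --multi-cluster-mode=primary                       # assumed flag value
  - --extended-spec-configmap=kube-system/global-spec  # ConfigMap holding the multi-cluster settings
---
# Fragment of that ConfigMap: kubeconfig Secrets for the remotely managed clusters
# (field names assumed; check the CIS multi-cluster documentation)
extendedSpec: |
  externalClustersConfig:
    - clusterName: cluster3
      secret: kube-system/cluster3-kubeconfig
    - clusterName: cluster4
      secret: kube-system/cluster4-kubeconfig
```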
Conclusion
F5 BIG-IP provides unmatched deployment options and features with OpenShift; these include:
- Support of OpenShift's CNIs, which allows sending the traffic directly to the PODs instead of using hostNetwork (which implies a security risk) or the common NodePort approach, which incurs an additional kube-proxy indirection.
- Both 1-tier and 2-tier arrangements (or both types simultaneously) are possible.
- F5's Container Ingress Services provides the ability to handle multiple OpenShift clusters, exposing their services on a single VIP. This is a unique feature in the industry.
- To complete the circle, this integration also provides IP address management (IPAM), which gives great flexibility to DevOps teams.
All of this is available regardless of whether the BIG-IP is a Virtual Edition, an appliance, or a chassis platform, allowing great scalability and multi-tenancy options.
The follow-up article BIG-IP deployment with OpenShift—application publishing focuses on DevOps and applications. It describes how CIS can also unleash all the traffic management and security features in a Kubernetes-native way.
We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our github repository.
- FV (Employee)
In this article should the following phrase:
"NodePort is popular for other external Load Balancers because it requires these to support the CNI in order to access a POD."
not be replaced by:
"NodePort is popular for other external Load Balancers because it does not require these to support the CNI in order to access a POD."
Thanks
- Ulises_Alonso (Employee)
Thanks for the errata, fixing it
- rgfarid (Nimbostratus)
Dear All,
I checked the channel of the CIS operator in OpenShift and it is in the beta channel.
Do you know when this operator will be delivered in the stable channel?
We can only deploy in our production environment operators that are delivered in the stable channel.
Thanks for your reply
- MichaelOLeary (Employee)
Hi rgfarid
CIS operator should be available for production use cases also. I've deployed CIS (both using YAML manifests and using the Openshift Operator) many times. Can you shoot me a message (you can use this DevCentral platform) if you need help with this?
Mike.
- rgfarid (Nimbostratus)
Hi MichaelOLeary,
Thanks for your reply.
My question is related to the default operator channel when trying to install the operator from the openshift redhat catalog.
When we list the catalog, under certified-operator-index, we get this output:
NAME                     DISPLAY NAME                    DEFAULT CHANNEL
f5-bigip-ctlr-operator   F5 Container Ingress Services   beta
We have a requirement for our production environment: the operator must be in the stable channel.
We have checked with Red Hat and they have replied to check with the ISV, that is to say F5 support.
Thanks for your help
- Trinath_Somanch (Employee)
For CIS Operator see https://catalog.redhat.com/software/container-stacks/detail/5e98727cd4ae96b7493f08ce