This is a follow-up to the article F5 BIG-IP deployment with OpenShift - platform and networking options, which covers the multiple deployment options available. It is highly recommended to read that article first.
Publishing applications through F5 BIG-IP adds many security and traffic management features. These include, for example, publishing applications from one or multiple clusters in a single VIP and additionally advertising them using dynamic routing or GSLB (DNS-based load balancing). These features are configured and automated in a Kubernetes-native way thanks to the Container Ingress Services (CIS) and F5 IPAM Controller (FIC) components. These components and the F5 CRD extensions also enable separation between NetOps and DevOps teams.
A summary of these features is shown in the next figure. Please refer to the official documentation for a detailed explanation of them.
There are many features common to both Layer 4 and Layer 7 publishing. The most commonly used are:
Of these, IPAM and GSLB will be covered in dedicated sections.
This article also gives an opinionated overview of the available resource types and which ones are preferred.
This case refers to 1-tier or 2-tier deployments where the BIG-IP operates above Layer 4, that is, at the SSL/TLS, HTTP or content layers, allowing features such as:
The BIG-IP is able to publish at Layer 7 with the Ingress, Route and VirtualServer resource types:
Because of the above, it is suggested to use either the VirtualServer F5 CRD, for a fully schema-validated API, or Routes in NextGen Route mode when additional NetOps/DevOps separation is desired. A minimal VirtualServer example is shown next.
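As an illustration, the following is a minimal sketch of a VirtualServer F5 CRD publishing an HTTP application. All names, the address, and the port are hypothetical; check the exact fields against the CIS documentation for the version in use:

apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: myapp-vs        # hypothetical name
  namespace: myapp      # hypothetical namespace
  labels:
    f5cr: "true"        # label commonly used by CIS to discover F5 CRDs
spec:
  host: myapp.example.com            # requests for this host are matched
  virtualServerAddress: 10.1.10.100  # VIP created in the BIG-IP (example address)
  pools:
  - path: /
    service: myapp-svc               # backing Kubernetes Service
    servicePort: 8080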
By default, OpenShift's router is configured to implement the Ingress and Route resources defined in any namespace. When using an alternative Ingress Controller such as BIG-IP, OpenShift's built-in router should be instructed not to process the manifests intended for the BIG-IP. This is best achieved with the route sharding mechanism, as shown in the next OpenShift IngressController configuration:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  namespaceSelector:
    matchExpressions:
    - key: router
      operator: NotIn
      values:
      - bigip
The above sample configuration prevents the OpenShift routers from handling any namespace that has the label router: bigip. This way, OpenShift's system Routes continue to be handled by the built-in OpenShift router. Note that the IngressClass mechanism is not suitable because it would not apply to Route resources. The namespaces intended for the BIG-IP are then labeled accordingly, as in the sketch below.
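A minimal sketch of such a namespace, assuming a hypothetical application namespace myapp:

apiVersion: v1
kind: Namespace
metadata:
  name: myapp       # hypothetical application namespace
  labels:
    router: bigip   # excluded from the built-in router by the sharding rule above

Optionally, CIS can be restricted to watch only these namespaces with its --namespace-label option, keeping the demarcation symmetric on both sides.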
This route sharding is not necessary when using CIS in F5 CRD mode (--custom-resource-mode option). In this mode, ConfigMaps, Routes, and Ingresses are not processed by CIS.
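For reference, a hypothetical excerpt of the CIS Deployment enabling this mode could look as follows; only the relevant container arguments are shown and the values are placeholders:

# Excerpt of the CIS container spec (hypothetical values)
args:
  - --bigip-url=10.1.1.10           # management address of the BIG-IP (example)
  - --bigip-partition=openshift     # BIG-IP partition managed by CIS (example)
  - --custom-resource-mode=true     # process only F5 CRDs; ignore Routes/Ingress/ConfigMaps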
This case refers to 1-tier or 2-tier deployments where the workload PODs or the Ingress Controllers (respectively) are exposed using a virtual server type that doesn't have an HTTP profile attached and can therefore only act on the traffic up to Layer 4.
In addition to the common features available in both Layer 4 and Layer 7 publishing, and depending on the resource types used, the following features are additionally available to Layer 4 resource types:
These are the resource types available:
- Service type LoadBalancer. This exposes the service with a standard (full-proxy) virtual server. This standard Kubernetes mode doesn't have much built-in functionality, and it has been extended by means of annotations which can refer to IPAM (required), health checking, or a traffic Policy CRD.
- IngressLink F5 CRD. This is a resource type specific to 2-tier deployments, used in conjunction with NGINX+. It makes use of the PROXY protocol to expose the client IP address. This resource type has been mostly superseded by the TransportServer CRD described next.
- TransportServer F5 CRD. This is the recommended resource type. Like the VirtualServer F5 CRD used for Layer 7, all the functionality is exposed as parameters incorporated in the schema of the CRD or referenced in a companion Policy F5 CRD, so it doesn't require the use of annotations. It is also possible to choose between standard (full-proxy) and performance (plain Layer 4) virtual server types. A minimal example is shown after this list.
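As an illustration, here is a minimal sketch of a TransportServer F5 CRD; all names, addresses, and ports are hypothetical:

apiVersion: cis.f5.com/v1
kind: TransportServer
metadata:
  name: myapp-ts        # hypothetical name
  namespace: myapp
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: 10.1.10.101  # VIP created in the BIG-IP (example address)
  virtualServerPort: 443
  type: tcp
  mode: standard        # full-proxy; use performance for a plain Layer 4 virtual server
  pool:
    service: myapp-svc  # backing Kubernetes Service
    servicePort: 443
    monitor:
      type: tcp
      interval: 10
      timeout: 31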
Tier-2 deployments are the canonical model of load balancing in Kubernetes. They allow independence from the infrastructure and have a clear demarcation between NetOps and DevOps. 1-tier deployments can also provide a level of demarcation between NetOps and DevOps, as will be shown in the section after this one.
The next figure shows the demarcation between NetOps and DevOps in 2-tier arrangements. It can be seen that the Ingress Controller is implemented inside the cluster. This is usually the OpenShift built-in router, but it can be NGINX+ or any other. The only requirement is that the Ingress Controller is exposed by a Service. It is encouraged to use the ClusterIP type to avoid the additional kube-proxy indirection layer of NodePort, as in the sketch below.
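A minimal sketch of such a Service, assuming a hypothetical NGINX+ Ingress Controller deployment whose PODs are labeled app: nginx-ingress:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress    # hypothetical name
  namespace: nginx-ingress
spec:
  type: ClusterIP        # avoids the kube-proxy indirection of NodePort
  selector:
    app: nginx-ingress   # hypothetical POD label of the Ingress Controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443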
As seen above, the BIG-IP as External Load Balancer can be configured with Layer 4 or Layer 7 services. Using an external load balancer in Layer 7 mode makes it possible to implement services such as SSL/TLS offloading (including the use of an HSM), TCP and HTTP optimizations (connection pooling), Web Application Firewall, Bot protection, content manipulation, and so on; in short, any service available in BIG-IP.
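For instance, one way to configure SSL/TLS offloading with CIS is the TLSProfile F5 CRD, referenced from a VirtualServer. The following is a minimal sketch in which the names and the client SSL profile are hypothetical:

apiVersion: cis.f5.com/v1
kind: TLSProfile
metadata:
  name: myapp-tls       # hypothetical name, referenced from a VirtualServer
  namespace: myapp
  labels:
    f5cr: "true"
spec:
  hosts:
  - myapp.example.com
  tls:
    termination: edge            # SSL/TLS is offloaded at the BIG-IP
    reference: bigip             # use a profile that already exists in the BIG-IP
    clientSSL: /Common/clientssl # hypothetical pre-created client SSL profile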
An important feature of this arrangement in Layer 7 mode is that the BIG-IP can expose a single VIP and distribute the requests across the different Ingress Controllers according to the route sharding configured. Some possibilities are shown next:
The following considerations apply to any External Load Balancer in a 2-tier arrangement:
In this case, the BIG-IP has the role of both External Load Balancer and Ingress Controller simultaneously, and only a single manifest is required to access the application. Persistence and health checking are managed directly by the BIG-IP at the workload POD level. An overview of the possible resource types is shown in the next figure.
With a 1-tier arrangement there is a fine demarcation line between NetOps (who traditionally manage the BIG-IPs) and DevOps teams that want to expose their services in the BIG-IPs. Next, a solution for this is proposed using the IPAM controller. The roles and responsibilities would be as follows:
Additionally, when using NextGen Routes, NetOps can pre-set and enforce parameters for the Routes or, alternatively, trust the DevOps teams and allow them to override these. This is described in the Publishing Layer 7 routes in OpenShift section above.
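For illustration, NextGen Routes are driven by a global ConfigMap maintained by NetOps. The sketch below assumes a hypothetical application namespace myapp; validate the exact schema against the CIS documentation for the version in use:

apiVersion: v1
kind: ConfigMap
metadata:
  name: global-cm          # hypothetical name, passed to CIS with --route-spec-configmap
  namespace: kube-system
data:
  extendedSpec: |
    extendedRouteSpec:
    - namespace: myapp         # namespace whose Routes are handled by the BIG-IP
      vserverAddr: 10.1.10.100 # VIP pre-set by NetOps (example address)
      vserverName: myapp-routes
      allowOverride: true      # let DevOps override these values in a local ConfigMap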
IPAM allows the DevOps teams to allocate new VIPs without requiring NetOps involvement. At present IPAM is supported in the following resource types:
- VirtualServer CRD
- TransportServer CRD
- Service type LoadBalancer
- IngressLink CRD
This means that, at the time of this writing, IPAM is not supported with Ingress or Route resources (including the NextGen Routes extension). Please file an RFE (Request for Enhancement) in the CIS GitHub repository to incorporate this feature.
IPAM can currently retrieve IP addresses from two providers:
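As an illustration, the default provider takes static IP ranges directly as a deployment parameter. The following is a hypothetical excerpt of the F5 IPAM Controller Deployment using it; the labels and ranges are examples only:

# Excerpt of the FIC container spec (hypothetical values)
args:
  - --orchestration=kubernetes
  - --ip-range='{"Production":"10.1.10.100-10.1.10.150","Development":"10.1.20.100-10.1.20.150"}'
  - --log-level=INFO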
Using IPAM is done by specifying two parameters in the manifests:
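A minimal sketch of a VirtualServer using IPAM, with hypothetical names; note that no virtualServerAddress is set and that CIS itself must be started with its IPAM option enabled:

apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: myapp-vs
  namespace: myapp
spec:
  host: myapp.example.com
  ipamLabel: Production   # matches a range defined in the FIC --ip-range parameter
  pools:
  - path: /
    service: myapp-svc
    servicePort: 8080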
This process can be seen in the next picture:
This allows for DNS-based load balancing of the VIPs that the different resource types expose. It is achieved using the BIG-IP DNS module (formerly known as GTM). The BIG-IP DNS module can be in the same BIG-IP where CIS exposes the VIPs or in remote BIG-IPs. The DNS configuration is written by CIS to a single BIG-IP DNS; after this, the different BIG-IP DNS instances automatically synchronize the GSLB configuration and its updates.
In order to publish a service in GSLB, DevOps only has to create an EDNS F5 CRD and match its domain name parameter with the hostname specified in any of the supported resource types:
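As an illustration, here is a minimal sketch of such an ExternalDNS (EDNS) F5 CRD. The domain name matches the host used in the earlier VirtualServer examples; the pool, server object, and monitor values are hypothetical:

apiVersion: cis.f5.com/v1
kind: ExternalDNS
metadata:
  name: myapp-edns       # hypothetical name
  namespace: myapp
  labels:
    f5cr: "true"
spec:
  domainName: myapp.example.com   # must match the host of the published resource
  dnsRecordType: A
  loadBalanceMethod: round-robin
  pools:
  - name: myapp-pool
    dnsRecordType: A
    loadBalanceMethod: round-robin
    dataServerName: /Common/GSLBServer  # hypothetical GSLB server object in BIG-IP DNS
    monitor:
      type: http
      send: "GET /"
      interval: 10
      timeout: 31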
F5 BIG-IP exposes applications in a Kubernetes-native way while adding advanced traffic management and security features on top.
Using F5 BIG-IP in Layer 7 mode brings all these features and allows for use cases like route sharding in a single VIP.
This article recommends the use of F5's CRDs or the NextGen Route extension to the regular OpenShift Route resource type. Moreover, the latter extension allows further separation between NetOps and DevOps.
Lastly, it is worth emphasising that BIG-IP allows these applications to be advertised using dynamic routing and GSLB, allowing for redundant and global topologies.
We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our GitHub repository.
In the coming weeks, content will be released covering in detail the multi-cluster features of F5 BIG-IP with CIS.