F5 BIG-IP deployment with OpenShift - publishing application options
Introduction
This is a follow-up to the article F5 BIG-IP deployment with OpenShift - platform and networking options, which covers the different deployment options. It is highly recommended to read that article first.
Publishing applications through F5 BIG-IP adds many security and traffic management features. These include, for example, publishing applications from one or multiple clusters behind a single VIP and additionally advertising them using dynamic routing or GSLB (DNS-based load balancing). These features are configured and automated in a Kubernetes-native way thanks to the Container Ingress Services (CIS) and F5 IPAM Controller (FIC) components. These components and the F5 CRD extensions also enable separation between the NetOps and DevOps teams.
A summary of these features is shown in the next figure. Please refer to the official documentation for a detailed explanation of them.
Publishing applications in BIG-IP: choosing between a Layer 4 or Layer 7 approach
Many features are common to both Layer 4 and Layer 7 publishing. The most commonly used are:
- IPAM (IP address management).
- GSLB (DNS-based) load balancing.
- Anti-DDoS protection.
- IP Intelligence.
- VIP source address filtering.
- Many traffic management options such as dynamic routing or iRules.
Of these, IPAM and GSLB will be covered in dedicated sections.
This article also gives an opinionated overview of the available resource types and which ones are preferred.
Publishing Layer 7 routes in OpenShift
This case refers to 1-tier or 2-tier deployments where the BIG-IP operates above Layer 4, that is, at the SSL/TLS, HTTP or content layers, allowing features such as:
- Advanced WAF.
- Bot protection.
- Identity Federation and Access Management.
- L7 persistence (on top of L4 persistence).
- Content manipulation.
- TLS offloading and security with HSM support (hardware, network and cloud options available).
The BIG-IP is able to publish at Layer 7 with the Ingress, Route and VirtualServer resource types:
- The standard Ingress resource is limited in its functionality, although CIS extends it somewhat with annotations or the legacy Override ConfigMap capability. This resource type should only be used for simple configurations.
- The VirtualServer F5 CRD aims to implement all possible functionalities. These are either parameters incorporated in the schema of the CRD or referenced in a companion Policy F5 CRD, so it doesn't require the use of annotations.
- The OpenShift Route resource is limited in its functionality by default, like the Ingress resource. With the F5 NextGen Route extension, ConfigMaps are used to bring its functionality up to par with the VirtualServer CRD. Note that although these ConfigMaps cannot be schema-validated by Kubernetes, they are YAML formatted and validated by CIS. This extension can also make use of the Policy F5 CRD.
An important difference between the NextGen Route extension and the VirtualServer CRD is that the former provides additional NetOps/DevOps separation capabilities. Using NextGen Routes, NetOps can pre-set parameters in a Global ConfigMap and optionally allow these to be overridden by DevOps. These parameters are:
- Using a Local (per-namespace) ConfigMap: the Virtual Server name, its VIP address, or any parameter available in a Policy CRD that can be referenced.
- The WAF Policy and Allow Source range parameters, which can also be overridden using annotations.
On the other hand, note that SSL/TLS profiles specified by DevOps (either embedded or as an annotation in the Route manifest) take precedence over the NetOps-specified ones, which are used as default settings.
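For illustration, a Global ConfigMap along the following lines lets NetOps pre-set the VIP for a given namespace and decide whether DevOps may override it. This is a minimal sketch: the ConfigMap name, namespace, tenant name and address are placeholders, and the exact extendedSpec fields should be checked against the CIS NextGen Routes documentation for the CIS version in use.
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-routes-cm        # referenced by CIS with the --route-spec-configmap option
  namespace: kube-system
data:
  extendedSpec: |
    extendedRouteSpec:
    - namespace: tenant1        # Routes in this namespace use these settings
      vserverAddr: 10.1.10.100  # VIP pre-set by NetOps
      vserverName: tenant1-routes
      allowOverride: true       # DevOps may override via a Local ConfigMap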
Because of the above, it is suggested to use either the VirtualServer F5 CRD for a fully schema-validated API, or Routes in NextGen Route mode when additional NetOps/DevOps separation is desired.
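As a reference, a minimal VirtualServer manifest could look like the following sketch. The host, address, service and names are placeholders, and the f5cr label is used by some CIS versions to select the F5 custom resources to watch.
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: app-virtual-server
  namespace: my-app
  labels:
    f5cr: "true"
spec:
  host: app.example.com
  virtualServerAddress: "10.1.10.80"   # alternatively, use ipamLabel to obtain it from IPAM
  pools:
  - path: /
    service: app-svc
    servicePort: 8080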
Publishing Layer 7 routes with Ingress or Route resources in OpenShift
By default, OpenShift's router is configured to implement the Ingress and Route resources defined in any namespace. When using an alternative Ingress Controller such as BIG-IP, OpenShift's built-in router should be instructed not to process the manifests intended for the BIG-IP. This is best achieved with the route sharding mechanism, as shown in the following OpenShift IngressController configuration:
namespaceSelector:
  matchExpressions:
  - key: router
    operator: NotIn
    values:
    - bigip
The above sample configuration prevents the OpenShift routers from handling any namespace carrying the label router: bigip. This way, OpenShift's system Routes continue to be handled by the built-in OpenShift router. Note that the IngressClass mechanism is not suitable because it does not apply to Route resources.
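The namespaces intended for the BIG-IP then only need to carry that label, for example as in the sketch below, where my-app is a placeholder namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: my-app            # placeholder namespace to be published by the BIG-IP
  labels:
    router: bigip         # excluded from the built-in router by the selector above
The same label can also be passed to the CIS instance through its --namespace-label option so that CIS only watches these namespaces.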
This route sharding is not necessary when using CIS in F5 CRD mode (the --custom-resource-mode option). In this mode, ConfigMaps, Routes and Ingresses are not processed by CIS.
Publishing services at Layer 4
This case refers to 1-tier or 2-tier deployments where the workload PODs or Ingress Controllers (respectively) are exposed using a virtual server type which doesn't have an HTTP profile attached and can therefore only act on the traffic up to Layer 4.
In addition to the features common to Layer 4 and Layer 7 publishing, and depending on the resource types used, the following features are available to Layer 4 resource types:
- The option of choosing between standard (full-proxy) or performance (plain Layer 4) virtual server types.
- UDP, TCP and SCTP protocol support.
- L4 based persistence methods.
These are the resource types available:
- Service type LoadBalancer. This exposes the service with a standard (full-proxy) virtual server. This standard Kubernetes mechanism doesn't have much built-in functionality and has been extended by means of annotations, which can refer to IPAM (required), health checking or a traffic Policy CRD.
- IngressLink F5 CRD. This is a resource type specific to 2-tier deployments, used in conjunction with NGINX+. It makes use of the PROXY protocol to expose the client IP address. This resource type has been largely superseded by the TransportServer CRD described next.
- TransportServer F5 CRD. This is the recommended resource type. Like the VirtualServer F5 CRD used for Layer 7, all the functionalities are parameters incorporated in the schema of the CRD or are referenced in a companion Policy F5 CRD, so it doesn't require the use of annotations. It is also possible to choose between standard (full-proxy) and performance (plain Layer 4) virtual server types. A sample manifest is shown after this list.
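For reference, a minimal TransportServer manifest could look like the following sketch. The addresses, ports and names are placeholders, and the f5cr label is used by some CIS versions to select the F5 custom resources to watch.
apiVersion: cis.f5.com/v1
kind: TransportServer
metadata:
  name: tcp-transport-server
  namespace: my-app
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "10.1.10.81"   # alternatively, use ipamLabel to obtain it from IPAM
  virtualServerPort: 8443
  type: tcp                            # tcp, udp or sctp
  mode: standard                       # standard (full-proxy) or performance
  snat: auto
  pool:
    service: tcp-app-svc
    servicePort: 8443
    monitor:
      type: tcp
      interval: 10
      timeout: 31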
Publishing applications with a 1-tier or 2-tier arrangement
Publishing the applications in a 2-tier arrangement
2-tier deployments are the canonical model of load balancing in Kubernetes. They allow independence from the infrastructure and have a clear demarcation between NetOps and DevOps. 1-tier deployments can also provide a level of demarcation between NetOps and DevOps, as shown in the next section.
The next figure shows the demarcation between NetOps and DevOps in 2-tier arrangements. It can be seen that the Ingress Controller is implemented inside the cluster. This is usually the OpenShift built-in router, but it can be NGINX+ or any other. The only requirement is that the Ingress Controller is exposed by a Service. Using the ClusterIP type is encouraged to avoid the additional kube-proxy indirection layer of NodePort.
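As an example, an in-cluster Ingress Controller (NGINX+ or any other) could be exposed to the BIG-IP with a plain ClusterIP Service such as the sketch below, where the names, selector and ports are placeholders for the actual controller deployment:
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
  namespace: ingress-tier
spec:
  type: ClusterIP               # avoids the extra kube-proxy hop of NodePort
  selector:
    app: ingress-controller     # must match the Ingress Controller PODs
  ports:
  - name: https
    port: 443
    targetPort: 443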
As can be seen above, the BIG-IP acting as External Load Balancer can be configured with Layer 4 or Layer 7 services. Using an external load balancer in Layer 7 mode allows implementing services such as SSL/TLS offloading (including the use of an HSM), TCP and HTTP optimizations (connection pooling), Web Application Firewall, bot protection, content manipulation and so on; in short, any service available in BIG-IP.
An important feature of this arrangement in Layer 7 mode is that the BIG-IP can expose a single VIP and distribute the requests according to the configured route sharding, spreading the load across the different Ingress Controllers. Some possibilities are shown next:
The following considerations apply to any External Load Balancer in a 2-tier arrangement:
- Persistence to the workload PODs requires coordination with the Ingress Controller, given that the External Load Balancer only steers the traffic to the Ingress Controllers. This is done by using source or cookie persistence, X-Forwarded-For headers or the PROXY protocol in the two tiers.
- External Load Balancers also don't have visibility of the workload PODs' health, or even how many of them sit behind an Ingress Controller. Ideally, Ingress Controllers should have a readiness probe indicating when no workload PODs are available, as sketched after this list.
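As an illustration only, the fragment below sketches what the Ingress Controller's Deployment could declare; the image, path and port are hypothetical and depend on the health endpoint of the specific Ingress Controller in use.
# Fragment of the Ingress Controller Deployment's POD template
containers:
- name: ingress-controller
  image: registry.example.com/ingress-controller:latest   # placeholder image
  readinessProbe:
    httpGet:
      path: /healthz            # hypothetical health endpoint
      port: 8080
    periodSeconds: 10
    failureThreshold: 3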
Publishing the applications in 1-tier deployments
In this case, the BIG-IP has the role of both External Load Balancer and Ingress Controller simultaneously, and only a single manifest is required to access the application. Persistence and health checking are managed directly by the BIG-IP at workload POD level. An overview of the possible resource types to use is shown in the next figure.
With a 1-tier arrangement there is a fine demarcation line between NetOps (who traditionally manage the BIG-IPs) and the DevOps teams that want to expose their services in the BIG-IPs. A solution for this using the IPAM controller is proposed next. The roles and responsibilities would be as follows:
- The NetOps team would be responsible for setting up the BIG-IP along with its basic configuration, up to the network connectivity with the cluster CNI.
- The NetOps team would also be responsible for setting up the IPAM Controller and, with it, the assignment of IP addresses for each DevOps team or project.
- The NetOps team would also set up the CIS instances. Each DevOps team or set of projects would have its own CIS instance, which would be fed with IP addresses from the IPAM controller. Each CIS instance would watch the namespaces of its projects, which are owned by the different DevOps teams. The CIS configuration specifies the BIG-IP partition used for the DevOps team or project (see the sketch after this list).
- The DevOps teams, as expected, deploy their own applications and create Kubernetes Service definitions for CIS consumption.
- Moreover, the DevOps teams also define how the Services are published. This means creating Ingress, Route or any other CRD definitions for publishing the services, constrained by the NetOps-owned IPAM controller and CIS instances.
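To give an idea of the per-team split, the following sketch shows the relevant arguments of a CIS (k8s-bigip-ctlr) instance dedicated to one DevOps team; the URL, partition name and label are placeholders, and the full set of options is described in the CIS documentation.
# Relevant container arguments of one team's CIS instance (Deployment fragment)
args:
- --bigip-url=https://bigip.example.com   # BIG-IP managed by NetOps
- --bigip-partition=team-a                # BIG-IP partition dedicated to this team
- --namespace-label=team=team-a           # watch only this team's namespaces
- --custom-resource-mode=true             # process F5 CRDs (VirtualServer, TransportServer, ...)
- --ipam=true                             # VIP addresses provided by the F5 IPAM Controller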
Additionally, when using NextGen Routes, NetOps can pre-set and enforce parameters for the Routes or alternatively trust the DevOps teams, allowing them to override these. This is described in the Publishing Layer 7 routes in OpenShift section above.
IPAM
IPAM allows the DevOps teams to allocate new VIPs without requiring NetOps involvement. At present IPAM is supported in the following resource types:
- VirtualServer CRD
- TransportServer CRD
- Service type LoadBalancer
- IngressLink CRD
This means that at the time of this writing, IPAM is not supported with Ingress or Route resources (including the NextGen Routes extension). Please file an RFE (Request for Enhancement) in the CIS GitHub repository to request this feature.
IPAM currently can retrieve IP addresses from two providers:
- F5 IPAM provider: implemented in the F5 IPAM controller itself, it allows specifying custom IP ranges for different IP pools. For persistence, the allocations are stored in a PVC.
- Infoblox: a well-known DNS & IPAM provider.
Using IPAM is done by specifying two parameters in the manifests:
- ipamLabel: selects the pool of addresses from which to retrieve the IP address. Typical examples of these labels are project names (web, accounts), exposure (dmz, internal) or development stage (devel, production), but it can be any arbitrary label.
- hostGroup: when the same value is specified in different manifests, the same IP address is used in all the matching manifests.
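For example, a Service of type LoadBalancer would request its VIP from IPAM with the cis.f5.com/ipamLabel annotation, as in the sketch below where the names and label values are placeholders. In the VirtualServer and TransportServer CRDs, ipamLabel and hostGroup are plain spec fields instead.
apiVersion: v1
kind: Service
metadata:
  name: accounts
  namespace: my-app
  annotations:
    cis.f5.com/ipamLabel: production   # IP pool defined in the IPAM controller
spec:
  type: LoadBalancer
  selector:
    app: accounts
  ports:
  - port: 443
    targetPort: 8443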
The IPAM allocation process can be seen in the next picture:
Publishing GSLB / EDNS support
This allows DNS-based load balancing of the VIPs that the different resource types expose. It is achieved using the BIG-IP DNS module (formerly known as GTM). The BIG-IP DNS module can be in the same BIG-IP where CIS exposes the VIPs or in remote BIG-IPs. CIS writes the DNS configuration to a single BIG-IP DNS; after that, the different BIG-IP DNS instances automatically synchronize the GSLB configuration and its updates.
To publish a service in GSLB, DevOps only has to create an EDNS F5 CRD and match its domain name parameter with the hostname specified in any of the supported resource types (a sample manifest is sketched after this list):
- VirtualServer F5 CRD.
- Routes using the NextGen Route extension.
- IngressLink F5 CRD.
- TransportServer F5 CRD.
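A minimal EDNS (ExternalDNS) manifest could look like the sketch below. The domain name must match the host used in the published resource, the pool and data server names are placeholders, and the exact fields should be checked against the CIS documentation for the version in use.
apiVersion: cis.f5.com/v1
kind: ExternalDNS
metadata:
  name: exdns-app
  namespace: my-app
  labels:
    f5cr: "true"
spec:
  domainName: app.example.com          # must match the published hostname
  dnsRecordType: A
  loadBalanceMethod: round-robin
  pools:
  - name: app-pool
    dnsRecordType: A
    dataServerName: /Common/GSLBServer # GSLB server object on the BIG-IP DNS
    monitor:
      type: https
      send: "GET /"
      interval: 10
      timeout: 31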
Conclusion and closing remarks
F5 BIG-IP exposes applications in a Kubernetes-native way, adding advanced traffic management and security features on top.
Using F5 BIG-IP in Layer 7 mode brings all these features and allows for use cases like route sharding behind a single VIP.
This article recommends using either the F5 CRDs or the NextGen Route extension to the regular OpenShift Route resource type. Moreover, the latter extension allows further separation between NetOps and DevOps.
Lastly, it is worth emphasising that BIG-IP allows these applications to be advertised using dynamic routing and GSLB, enabling redundant and global topologies.
We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our GitHub repository.
In the coming weeks, content will be released covering the multi-cluster features of F5 BIG-IP with CIS in detail.