3 Ways to use F5 BIG-IP with OpenShift 4
F5 BIG-IP can provide key infrastructure and application services in a Red Hat OpenShift 4 environment. Examples include core load balancing for the OpenShift API and Router, DNS services for the cluster, a supplement to or replacement for the OpenShift Router, and security protection for the OpenShift management and application services.
#1 Core Services
OpenShift 4 requires a method of providing high availability to the OpenShift API (port 6443), MachineConfig server (port 22623), and Router services (ports 80/443). BIG-IP Local Traffic Manager (LTM) can provide these trusted services easily. OpenShift also requires several DNS records; BIG-IP can accelerate responses for these as a DNS cache and/or provide Global Server Load Balancing of the cluster's DNS records.
Additional documentation about OpenShift 4 Network Requirements (Red Hat)
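As a rough sketch of what the BIG-IP side of this looks like, the tmsh commands below create TCP pass-through virtual servers for the API, MachineConfig, and Router services. The node addresses, virtual server IPs, and object names are placeholders for illustration only (they are not taken from the article's demo environment), and a FAST or AS3 declaration can build the same objects declaratively.

# Pools for the control-plane and ingress services (member IPs are placeholders).
# During installation, the bootstrap node should also be a temporary member of the API and MachineConfig pools.
tmsh create ltm pool ocp_api_pool members add { 10.0.0.11:6443 10.0.0.12:6443 10.0.0.13:6443 } monitor tcp
tmsh create ltm pool ocp_machineconfig_pool members add { 10.0.0.11:22623 10.0.0.12:22623 10.0.0.13:22623 } monitor tcp
tmsh create ltm pool ocp_router_http_pool members add { 10.0.0.21:80 10.0.0.22:80 } monitor tcp
tmsh create ltm pool ocp_router_https_pool members add { 10.0.0.21:443 10.0.0.22:443 } monitor tcp

# Layer-4 virtual servers that pass TLS straight through to the cluster.
tmsh create ltm virtual ocp_api_vs destination 10.0.0.100:6443 ip-protocol tcp profiles add { fastL4 } pool ocp_api_pool
tmsh create ltm virtual ocp_machineconfig_vs destination 10.0.0.100:22623 ip-protocol tcp profiles add { fastL4 } pool ocp_machineconfig_pool
tmsh create ltm virtual ocp_router_http_vs destination 10.0.0.101:80 ip-protocol tcp profiles add { fastL4 } pool ocp_router_http_pool
tmsh create ltm virtual ocp_router_https_vs destination 10.0.0.101:443 ip-protocol tcp profiles add { fastL4 } pool ocp_router_https_pool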
#2 OpenShift Router
Red Hat provides its own OpenShift Router for L7 load balancing, but F5 BIG-IP can also provide these services using Container Ingress Services (CIS). Instead of deploying load-balancing resources on the same nodes that host OpenShift workloads, BIG-IP provides these services outside the cluster on either hardware or Virtual Edition platforms. Container Ingress Services can run either as an auxiliary router alongside the included router or as a replacement for it. A trimmed example of the controller configuration for this mode follows the related articles below.
Additional articles related to Container Ingress Services
• Using F5 BIG-IP Controller for OpenShift
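As a rough sketch of the controller side, the Deployment below runs the CIS controller (k8s-bigip-ctlr) in OpenShift Route mode. The BIG-IP address, partition, VIP, and VXLAN tunnel name are placeholders, the bigip-login Secret and the RBAC/ServiceAccount objects are assumed to already exist, and the flag names are the ones I recall from the CIS documentation, so verify them against the release you deploy.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr
    spec:
      serviceAccountName: bigip-ctlr
      containers:
      - name: k8s-bigip-ctlr
        image: f5networks/k8s-bigip-ctlr   # pin a specific version in practice
        args:
        - --bigip-url=10.0.0.5                     # BIG-IP management address
        - --bigip-partition=ocp                    # partition CIS writes into
        - --credentials-directory=/tmp/creds       # username/password files from the Secret
        - --manage-routes=true                     # watch OpenShift Route resources
        - --route-vserver-addr=10.0.0.102          # VIP the BIG-IP exposes for Routes
        - --pool-member-type=cluster               # pool members are pod IPs
        - --openshift-sdn-name=/Common/ocp_vxlan   # VXLAN tunnel used to reach pod IPs
        volumeMounts:
        - name: bigip-creds
          mountPath: /tmp/creds
          readOnly: true
      volumes:
      - name: bigip-creds
        secret:
          secretName: bigip-login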
#3 Security
F5 can help filter, authenticate, and validate requests going into or out of an OpenShift cluster. LTM can host sensitive SSL resources outside the cluster (including on a hardware HSM if necessary) and filter requests (for example, disallowing requests to internal resources such as the management console). Advanced Web Application Firewall (AWAF) policies can be deployed to keep bad actors from reaching sensitive applications. Access Policy Manager can provide OpenID Connect services for the OpenShift management console and help provide identity services for applications and microservices running on OpenShift (for example, converting a BasicAuth request into a JWT for a microservice).
Additional documentation related to attaching a security policy to an OpenShift Route
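For the Route case specifically, here is a minimal sketch assuming CIS is already managing OpenShift Routes as in section #2: an annotation on the Route references an AWAF policy that already exists on the BIG-IP. The annotation name (virtual-server.f5.com/waf) is the one I recall from the CIS Route documentation, and the hostname, namespace, service, and policy path are placeholders; check the documentation linked above for the exact form your CIS version expects.

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: store-web
  namespace: store
  annotations:
    # Attach an existing BIG-IP AWAF/ASM policy to the virtual server that
    # CIS builds for this Route (the policy must already exist on the BIG-IP).
    virtual-server.f5.com/waf: /Common/owasp_ready_policy
spec:
  host: store.apps.ocp4.example.com
  to:
    kind: Service
    name: store-web
  port:
    targetPort: 8080
  tls:
    termination: edge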
Where Can I Try This?
The environment that was used to write this article and create the companion video can be found at: https://github.com/f5devcentral/f5-k8s-demo/tree/ocp4/ocp4.
For folks who are part of F5, you can access this in our Unified Demo Framework and schedule labs with customers/partners (search for "OpenShift 4.3 with CIS"). I plan on publishing a version of this demo environment that can run natively in AWS; check back on this article for updates. Thanks!
- Eric_Chen (Employee)
I plan on updating this article with links to additional articles. Please let me know if you have any comments/questions. Thanks for reading!
- tomek (Nimbostratus)
Hi, very helpful article.
Do you know how to configure the F5 to create VXLAN tunnels between the F5 and two OpenShift clusters?
- yogeesh-venkanna (Nimbostratus)
Hi Eric_Chen,
I am planning to migrate the infrastructure load balancing from an HAProxy LB to an F5 LB (VE/BM). I read your article but was unable to completely understand it. I could not configure the LB using the FAST template, but based on the configuration in this file https://github.com/chen23/openshift-4-3/blob/master/fast/openshift/2_infrastructure.yaml from your repository, I configured the LB manually. I could install the masters successfully, but the workers are unable to join the cluster.
I see the errors below on all worker nodes.
[root@yv-ops01 core]# journalctl -f
-- Logs begin at Thu 2022-02-17 06:47:29 UTC. --
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:11.150237 1481 kubelet.go:2223] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:11.180727 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:11.281716 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: I0218 02:46:11.289240 1481 csi_plugin.go:1031] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "yv-ops01.yv.okd" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:11.382485 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:11.483330 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:11.584338 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:11.685116 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:11.786245 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:11.887370 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:11 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:11.988190 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:12 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:12.088827 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:12 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:12.190147 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:12 yv-ops01.yv.okd hyperkube[1481]: I0218 02:46:12.206303 1481 csi_plugin.go:1031] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "yv-ops01.yv.okd" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 02:46:12 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:12.291292 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:12 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:12.392020 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:12 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:12.492859 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:12 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:12.593884 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Feb 18 02:46:12 yv-ops01.yv.okd hyperkube[1481]: E0218 02:46:12.694361 1481 kubelet.go:2303] "Error getting node" err="node \"yv-ops01.yv.okd\" not found"
Can you please help / guide me in this regard?
If I have raised this on the wrong forum, please direct me to the right channel.
Thanks in advance.