BIG-IP with OpenShift: article series
3 Ways to use F5 BIG-IP with OpenShift 4
F5 BIG-IP can provide key infrastructure and application services in a RedHat OpenShift 4 environment. Examples include core load balancing for the OpenShift API and Router, DNS services for the cluster, a supplement or replacement for the OpenShift Router, and security protection for the OpenShift management and application services.

#1. Core Services
OpenShift 4 requires a method to provide high availability to the OpenShift API (port 6443), MachineConfig (port 22623), and Router (ports 80/443) services. BIG-IP Local Traffic Manager (LTM) can provide these trusted services easily (a rough tmsh sketch is included at the end of this article). OpenShift also requires several DNS records; the BIG-IP can accelerate responses to these as a DNS cache and/or provide Global Server Load Balancing of the cluster DNS records.
Additional documentation about OpenShift 4 network requirements (RedHat):
• Networking requirements for user-provisioned infrastructure

#2. OpenShift Router
RedHat provides its own OpenShift Router for L7 load balancing, but the F5 BIG-IP can also provide these services using Container Ingress Services. Instead of deploying load-balancing resources on the same nodes that host OpenShift workloads, F5 BIG-IP provides these services outside the cluster on either hardware or Virtual Edition platforms. Container Ingress Services can run either as an auxiliary router alongside the included router or as a replacement.
Additional articles related to Container Ingress Services:
• Using F5 BIG-IP Controller for OpenShift

#3. Security
F5 can help filter, authenticate, and validate requests going into or out of an OpenShift cluster. LTM can host sensitive SSL resources outside the cluster (including on a hardware HSM if necessary) and filter requests (e.g. disallow requests to internal resources like the management console). Advanced Web Application Firewall (AWAF) policies can be deployed to stop bad actors from reaching sensitive applications. Access Policy Manager can provide OpenID Connect services for the OpenShift management console and help provide identity services for applications and microservices running on OpenShift (e.g. converting a BasicAuth request into a JWT token for a microservice).
Additional documentation related to attaching a security policy to an OpenShift Route:
• AS3 Override

Where Can I Try This?
The environment used to write this article and create the companion video can be found at: https://github.com/f5devcentral/f5-k8s-demo/tree/ocp4/ocp4. For folks that are part of F5, you can access this in our Unified Demo Framework and schedule labs with customers/partners (search for "OpenShift 4.3 with CIS"). I plan on publishing a version of this demo environment that can run natively in AWS. Check back to this article for any updates. Thanks!
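As a rough companion to the #1 Core Services section above, the following tmsh sketch creates the API, MachineConfig, and Router virtual servers. All names and addresses are placeholders (three example control-plane nodes at 10.0.0.11-13, two example router nodes at 10.0.0.21-22, and example VIPs); adapt them to your environment and confirm the exact LTM syntax for your BIG-IP version.

  # Pools pointing at the OpenShift control-plane and router nodes (placeholder addresses)
  tmsh create ltm pool ocp_api_pool members add { 10.0.0.11:6443 10.0.0.12:6443 10.0.0.13:6443 } monitor tcp
  tmsh create ltm pool ocp_machineconfig_pool members add { 10.0.0.11:22623 10.0.0.12:22623 10.0.0.13:22623 } monitor tcp
  tmsh create ltm pool ocp_router_http_pool members add { 10.0.0.21:80 10.0.0.22:80 } monitor http
  tmsh create ltm pool ocp_router_https_pool members add { 10.0.0.21:443 10.0.0.22:443 } monitor tcp

  # L4 virtual servers for the API/MachineConfig endpoints and the Router
  tmsh create ltm virtual ocp_api_vs destination 10.0.1.100:6443 ip-protocol tcp pool ocp_api_pool profiles add { tcp }
  tmsh create ltm virtual ocp_machineconfig_vs destination 10.0.1.100:22623 ip-protocol tcp pool ocp_machineconfig_pool profiles add { tcp }
  tmsh create ltm virtual ocp_router_http_vs destination 10.0.1.101:80 ip-protocol tcp pool ocp_router_http_pool profiles add { tcp }
  tmsh create ltm virtual ocp_router_https_vs destination 10.0.1.101:443 ip-protocol tcp pool ocp_router_https_pool profiles add { tcp }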
Connect and Secure OpenShift cloud services with F5 Distributed Cloud
Check out the "OpenShift ANYWHERE with F5 Distributed Cloud" how-to video series. In these first two episodes, we showcase how easy it is to use #F5 Distributed Cloud to discover services for Azure Red Hat OpenShift (#ARO) and Red Hat OpenShift on AWS (#ROSA), respectively, and make #OpenShift services available in any location, whether in public clouds, private clouds, or at the edge.
Lightboard Lessons: What is OpenShift?
The OpenShift Container Platform from RedHat is a platform as a service leveraging Docker and Kubernetes to give app developers an easy button for application management, deployment, and scale. In this episode of Lightboard Lessons, Jason Rahm builds on his earlier videos on Docker and Kubernetes to discuss the value-added building blocks that OpenShift brings to the container table.
Resources
• What are Containers? (video)
• What is Kubernetes? (video)
• What is Kubernetes? (DevCentral Basics article)
• Introduction to F5 Container Ingress Services
BIG-IP deployment options with Openshift

NOTE: this article has been superseded by these updated articles:
• F5 BIG-IP deployment with OpenShift - platform and networking options
• F5 BIG-IP deployment with OpenShift - publishing application options
NOTE: outdated content follows.

This article is an agnostic overview of the options for using BIG-IP with RedHat Openshift: on-prem or in the cloud, in 1-tier or 2-tier arrangements, possibly alongside NGINX+. This blog is structured as follows:
• Introduction
• BIG-IP platform flexibility: deployment, scalability and multi-tenancy options
• Openshift networking options
• BIG-IP networking options (1-tier arrangement, 2-tier arrangement)
• Publishing the applications: BIG-IP CIS Kubernetes resource types (Service type LoadBalancer; Ingress and Route resources, the extensibility problem; full flexibility & advanced services with AS3 ConfigMaps; F5 Custom Resource Definitions)
• Installing Container Ingress Services (CIS) for Openshift & BIG-IP integration
• Conclusion

Introduction
When using BIG-IP with RedHat Openshift, a container component named Container Ingress Services (CIS from now on) is used to plug the BIG-IP APIs into the Kubernetes APIs. When a user configuration is applied, or when a status change occurs in the cluster, CIS automatically updates the configuration in the BIG-IP using the AS3 declarative API. CIS supports IP Address Management (IPAM from now on) by making use of the F5 IPAM Controller (FIC from now on), which is deployed as a container as well. The FIC IPAM controller can have its own address database or be connected to an external provider such as Infoblox. The next picture shows how these components fit together.

A single BIG-IP cluster can manage both VM and container workloads in the same cluster; separation between these can be set at the administrative level with partitions and at the network level with routing domains if required. BIG-IP offers a wide range of options to be used with RedHat Openshift, often driven by customer requests. In the next sections we cover these options and the considerations to take into account when choosing between them. The full documentation can be found in F5 clouddocs. F5 BIG-IP container integrations are Open Source Software (OSS) and can be found in this github repository, where you will find additional technical details. Please comment below if you have any question about this article.

BIG-IP platform flexibility: deployment, scalability and multi-tenancy options
First of all, it should be clarified that the configuration is always the same regardless of the deployment option chosen: it is independent of the BIG-IP being an appliance, a scale-out chassis, or a Virtual Edition. This platform flexibility also opens up different options for scalability, multi-tenancy, and the use of hardware accelerators or HSMs/NetHSMs/SaaS-HSMs to keep the SSL/TLS private keys secure in a FIPS-compliant manner.

The following options apply to a single BIG-IP cluster:
• A single BIG-IP cluster can handle several Openshift clusters. This requires at least one CIS instance per Openshift cluster.
• A given CIS instance can also manage a selected set of namespaces; these namespaces can be specified with a list or a label selector (a minimal sketch of such a per-team CIS configuration follows this list).
• In the BIG-IP, each CIS instance will typically write in a dedicated partition, isolated from other CIS instances. When using AS3 ConfigMaps, a single CIS instance can manage several BIG-IP partitions.
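To make the multi-tenancy model above concrete, here is a minimal sketch of the container arguments for a CIS (k8s-bigip-ctlr) instance dedicated to one team. The BIG-IP address, partition name, and namespace label are illustrative placeholders; check the CIS documentation for the full set of flags supported by your release.

  # Fragment of a k8s-bigip-ctlr Deployment spec (illustrative values, credentials omitted)
  args:
    - --bigip-url=192.0.2.10           # BIG-IP cluster managed by this CIS instance
    - --bigip-partition=ocp-team-a     # dedicated partition for this team/project
    - --namespace-label=team=a         # only watch namespaces owned by this team
    - --pool-member-type=cluster
    - --insecure=true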
As indicated in the picture, a single BIG-IP cluster can scale up horizontally with up to 8 BIG-IP instances; this is referred to as Scale-N in the BIG-IP documentation. When hard tenant isolation is required, a dedicated BIG-IP cluster or a vCMP guest instance should be used. vCMP technology is available on the larger appliances and the scale-out chassis. vCMP allows running several independent BIG-IP instances as guests, even running different versions of BIG-IP, and each guest can be allocated different amounts of hardware resources. In the next picture, guests are shown as different colored bars spanning several blades (grey bars).

Openshift networking options
Kubernetes networking is provided by Container Networking Interface plugins (CNI from now on), and Openshift supports the following:
• OpenshiftSDN - supported since Openshift 3.x and still the default CNI. It makes use of VXLAN encapsulation.
• OVNKubernetes - supported since Openshift 4.4. It makes use of Geneve encapsulation.
Feature-wise, these CNIs can be compared with the following table from the Openshift documentation. Besides the above features, performance should also be taken into consideration: the NICs used in the Openshift cluster should perform encapsulation off-loading, reducing the CPU load on the nodes. Increasing the MTU is recommended, especially for encapsulating CNIs; this is suggested in Openshift's documentation as well and needs to be set at installation time in the install-config.yaml file (see this link for details).

BIG-IP networking options
The first thing to decide is how the BIG-IP should reach the PODs: should the BIG-IP access the PODs directly, or should the typical 2-tier load-balancing arrangement with an in-cluster Ingress Controller be used? Equally important is to decide how to separate NetOps and DevOps responsibilities. CI/CD pipelines provide a management layer that allows several teams to approve or block changes before committing; here we tackle how to achieve this separation without such an additional management layer.

BIG-IP networking option - 1-tier arrangement
In this arrangement, the BIG-IP is able to reach the PODs without any address translation. By using only one tier of load balancing (see the next picture), latency is reduced (potentially also improving client session performance). Persistence is handled easily, and the PODs can be monitored directly, providing an accurate view of the application's health.

As can be seen in the picture above, in a 1-tier arrangement the BIG-IP is part of the CNI network. This is supported for both the OpenshiftSDN and OVNKubernetes CNIs. Configuration for BIG-IP with the OpenshiftSDN CNI can be found in clouddocs.f5.com. Currently, when using the OVNKubernetes CNI, the hybrid-networking option has to be used; in this latter case the Openshift cluster extends its CNI network towards the BIG-IPs using VXLAN encapsulation instead of the Geneve encapsulation used internally between the Openshift nodes. BIG-IP configuration steps for OVNKubernetes in hybrid mode can be followed in this repository created by F5 PM Engineer Mark Dittmer until this is published in clouddocs.f5.com.
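For reference, attaching the BIG-IP to the OpenshiftSDN overlay involves, among other steps, creating a HostSubnet object that represents the BIG-IP as a VTEP. The sketch below uses placeholder names and addresses and only illustrates the Kubernetes side; the complete, authoritative procedure (including the BIG-IP tunnel configuration) is the one in F5 clouddocs.

  # Illustrative HostSubnet for a BIG-IP VTEP (name and hostIP are placeholders)
  apiVersion: network.openshift.io/v1
  kind: HostSubnet
  metadata:
    name: f5-bigip-node01
    annotations:
      pod.network.openshift.io/fixed-vnid-host: "0"
      pod.network.openshift.io/assign-subnet: "true"
  # host/hostIP describe the BIG-IP; the POD subnet is assigned by OpenShift
  host: f5-bigip-node01
  hostIP: 192.168.10.5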
With a 1-tier configuration there is a finer demarcation line between NetOps (who traditionally manage the BIG-IPs) and the DevOps teams that want to expose their services in the BIG-IPs. The next diagram proposes a solution for this using the IPAM controller. The roles and responsibilities would be as follows:
• The NetOps team is responsible for setting up the BIG-IP and its base configuration, up to the network connectivity towards the cluster, including the CNI overlay.
• The NetOps team is also responsible for setting up the IPAM Controller and, with it, the assignment of IP addresses for each DevOps team or project.
• The NetOps team also sets up the CIS instances. Each DevOps team or set of projects has its own CIS instance, which is fed IP addresses from the IPAM controller. Each CIS instance watches the namespaces of its DevOps team or project; these namespaces are owned by the different DevOps teams. The CIS configuration specifies the BIG-IP partition for the DevOps team or project.
• The DevOps team, as expected, deploys its own applications and creates Kubernetes Service definitions for CIS consumption. Moreover, the DevOps team also defines how the Services will be published, which means creating Ingress, Route, or other CRD definitions; these are constrained by the NetOps-owned IPAM controller and CIS instances.

BIG-IP networking option - 2-tier arrangement
This is the typical way in which Kubernetes clusters are deployed. In a 2-tier arrangement the External Load Balancer doesn't need to be aware of the CNI; it points to the NodePort addresses of the Ingress Controller inside the Kubernetes cluster, and it is up to the infrastructure how to send the traffic to the Ingress Controllers. A 2-tier arrangement sets a harder demarcation line between the NetOps and DevOps teams. This type of arrangement using BIG-IP is shown next. Most External Load Balancers can only perform L4 functions, but BIG-IP can perform both L4 and L7 functions, as we will see in the next sections. Note: the proxy protocol mentioned in the diagram is used to allow persistence based on the client's IP in the Ingress Controller, regardless of whether the traffic is sent encrypted or not.

Publishing the applications: BIG-IP CIS Kubernetes resource types

Service type LoadBalancer
This is a Kubernetes built-in mechanism to expose Ingress Controllers through any External Load Balancer; in other words, this method is meant for 2-tier topologies. This mechanism is very feature-limited, and extensibility is done by means of annotations. F5 CIS supports IPAM integration with this resource type (a sketch follows below); check this link for all the possible options. In general, a problem or limitation of Kubernetes annotations (regardless of the resource type) is that they are not validated by the Kubernetes API against a schema, therefore allowing bad configurations to be stored in Kubernetes. The recommended practice is to limit annotations to simple configurations; declarations with complex annotations tend to silently fail or not behave as expected. Especially in these cases, CRDs are recommended. These are described further down.
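As a sketch of the Service type LoadBalancer integration with the F5 IPAM Controller, a DevOps team could request a VIP with the cis.f5.com/ipamLabel annotation, as below. The namespace, label value, and selector are placeholders, and the annotations supported by your CIS/FIC release should be verified in the documentation.

  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-ingress-lb
    namespace: team-a
    annotations:
      cis.f5.com/ipamLabel: team-a-external   # IPAM label defined by NetOps in the FIC configuration
  spec:
    type: LoadBalancer
    selector:
      app: nginx-ingress          # pods of the in-cluster Ingress Controller
    ports:
      - name: https
        port: 443
        targetPort: 443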
Ingress and Route resources, the extensibility problem
Kubernetes and Openshift provide the following resource types for publishing L7 routes for HTTP/HTTPS services:
• Routes: Openshift exclusive, eventually going to be deprecated.
• Ingress: the Kubernetes standard.
Although these are simple to use, they are very limited in functionality, and more often than not the Ingress Controllers require annotations to augment that functionality. The F5 annotations available for Routes can be checked in this link and for Ingress resources in this link. As mentioned previously, complex annotations should be avoided; when publishing L7 routes, the limitations of annotations are more evident, and CRDs are even more recommended.

Route and Ingress resources can be further augmented by means of the CIS feature named Override AS3 ConfigMap, which allows specifying an AS3 declaration and attaching it to a Route or Ingress definition. This gives access to almost all the features & modules available in BIG-IP, as exhibited in the next picture. Although Override AS3 ConfigMap eliminates the extensibility limitations of annotations, it shares the problem that these declarations are not validated by the Kubernetes API against the AS3 schema. Instead, they are validated by CIS, but note that ConfigMaps are not capable of reporting the status of the declaration; the ConfigMap declaration status can therefore only be checked in the CIS logs. Override AS3 ConfigMap declarations are meant to apply to all the services published by the CIS instance. In other words, this mechanism is useful for applying a general policy or shared configuration across several services (e.g. WAF, APM, elaborate monitoring).

Full flexibility and advanced services with AS3 ConfigMap
The AS3 ConfigMap option is similar to Override AS3 ConfigMap, but it doesn't rely on a pre-existing Ingress or Route resource: the whole BIG-IP configuration is set up in the ConfigMap. Using full AS3 ConfigMaps with the --hubmode CIS option allows the services to be defined in the DevOps-owned namespaces while the VIP and associated configurations (e.g. TLS settings, IP intelligence, WAF policy) live in a namespace owned by the NetOps team. This provides independence between the two teams. Override AS3 ConfigMaps tend to be small because they are just used to patch, in other words extend, the Ingress- and Route-generated AS3 configuration. On the other hand, using full AS3 ConfigMaps requires creating a large AS3 JSON declaration that Ingress/Route users are not used to. Again, the AS3 definition within the ConfigMap is validated by BIG-IP and not by Kubernetes, which is a limitation because the status of the configuration can only be fully checked in the CIS logs.

F5 Custom Resource Definitions (CRDs)
Above we've seen the Kubernetes built-in resource types and their limitations in advanced services and flexibility. We've also seen the Swiss Army knife that AS3 ConfigMaps are, and their limitation of not being Kubernetes schema-validated. Kubernetes allows API augmentation through Custom Resource Definitions (CRDs), which define new resource types for any functionality needed. F5 has created the following CRDs to provide the ease of use of the built-in resource types but with greater functionality and without requiring annotations. Each CRD is focused on different use cases.

IngressLink aims to simplify 2-tier deployments when using BIG-IP and NGINX+, by using the IngressLink CRD instead of a Service of type LoadBalancer. At present the IngressLink CRD provides the following features (a minimal example follows this list):
• Proxy Protocol support or other customizations by using iRules.
• Automatic health-check monitoring of the NGINX+ readiness port in BIG-IP.
• Linking with NGINX+ using either NodePort or Cluster mode; in the latter case, bypassing any kube-proxy/iptables indirection.
• More to come...
When using IngressLink, ports 443 and 80 are automatically exposed and the requests are sent to the NGINX+ Ingress Controller.
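A minimal IngressLink sketch is shown below. It assumes an NGINX+ Ingress Controller whose service carries the label app: ingresslink, a placeholder VIP address, and an existing proxy-protocol iRule on the BIG-IP; the fields supported by your CIS release should be confirmed in the IngressLink documentation.

  apiVersion: cis.f5.com/v1
  kind: IngressLink
  metadata:
    name: nginx-ingresslink
    namespace: nginx-ingress
  spec:
    virtualServerAddress: "192.168.10.60"    # VIP published on the BIG-IP
    iRules:
      - /Common/Proxy_Protocol_iRule         # e.g. enable proxy protocol towards NGINX+
    selector:
      matchLabels:
        app: ingresslink                     # selects the NGINX+ Ingress Controller service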
TransportServer is meant to expose non-HTTP traffic: any TCP or UDP traffic on any port, with several controls available, again without requiring annotations.

VirtualServer takes an L7-route-oriented approach analogous to the Ingress/Route resources, but provides advanced configurations whilst avoiding annotations or Override AS3 ConfigMaps. It can be used in either a 1-tier or a 2-tier arrangement; in the latter case the BIG-IP takes the role of External Load Balancer in front of in-cluster Ingress Controllers while still providing advanced L7 services.

All these new CRDs support IPAM.

Summary of BIG-IP CIS Kubernetes resource types
So which resource types should be used? The next tables try to summarize their features, strengths, and usability:
• Ease of use
• Network topology and overall suitability
• Comparing CRDs, Ingress/Routes and ConfigMaps
Please note that the features of the different resources are continuously changing; please check the latest docs for more up-to-date information.

Installing Container Ingress Services (CIS) for Openshift & BIG-IP integration
CIS installation can be performed in different ways:
• Using Kubernetes resources (named "manual" in F5 clouddocs) - this approach is the most low-level one and allows for ultimate customization.
• Using the Helm chart. This provides life-cycle management of the CIS installation in any Kubernetes cluster.
• Using the CIS Operator. Built on top of the Helm chart, it additionally provides Openshift-integrated management.
In the screenshots below we can see how the Openshift Operator construct allows for automatic download and updates. We can also see the use of the F5BigIpCtlr resource type to configure the different instances. At present, the IPAM controller installation is only done using Kubernetes resources.

After these components are created, the VXLAN configuration needs to be created in the BIG-IP; this can be automated using any of the BIG-IP automation tools, mainly Ansible and Terraform (a rough sketch follows below).
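As an illustration of that last step, the VXLAN plumbing on the BIG-IP for an OpenshiftSDN integration typically looks like the sketch below. The names and addresses are placeholders: the local-address is the BIG-IP VTEP address, and the self IP must come from the subnet that OpenShift assigned to the BIG-IP HostSubnet; the procedure in F5 clouddocs remains the reference.

  # Illustrative tmsh commands (placeholder names and addresses)
  tmsh create net tunnels vxlan vxlan-mp flooding-type multipoint
  tmsh create net tunnels tunnel openshift_vxlan key 0 profile vxlan-mp local-address 192.168.10.5
  tmsh create net self 10.131.0.5/14 allow-service none vlan openshift_vxlan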
Conclusion
F5 BIG-IP provides several deployment options for Openshift with unmatched functionality, whether used as an External Load Balancer or as an Ingress Controller in a single-tier setup. Three components are used for this integration:
• The F5 Container Ingress Services (CIS), for plugging the Kubernetes API into the BIG-IP.
• The F5 CIS Operator for Openshift, for installing and managing CIS.
• The F5 IPAM controller.
Resource types are the API used to define how Services or Ingress Controllers are published in the F5 BIG-IP. These are constantly being updated, and it is recommended to check F5 clouddocs for up-to-date information. We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our github repository.

Using F5 BIG-IP Controller Operator for OpenShift
Today the A&O PM and PD teams announced the availability of the Certified F5 BIG-IP Controller Operator (using Helm charts) on OpenShift 4.x platforms. In this document we discuss how to install, configure, and deploy CIS using the RedHat-certified F5 BIG-IP Controller Operator on OpenShift 4.x platforms.

Introduction
What is an Operator? A method of packaging, deploying, and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl/oc tooling. You can think of Operators as the runtime that manages this type of application on Kubernetes. Conceptually, an Operator takes human operational knowledge and encodes it into software that is more easily packaged and shared with consumers. The F5 BIG-IP Controller Operator is a Service Operator which installs the F5 BIG-IP Controller (Container Ingress Services) on OpenShift 4.x platforms.

Prerequisites
• OpenShift 4.x
• BIG-IP (F5 CIS supported versions)
In this document we will use CodeReady Containers to install, configure, and deploy CIS using the F5 BIG-IP Controller Operator. CRC 1.7.0 installs OCP 4.3.1 on your laptop. Get a suitable image from the CRC repo and follow the instructions to install CRC and bring up your single-node OCP 4.3.1 cluster.

Install, Configure and Deploy CIS using the Operator

Accessing the OCP 4.3.1 web console
From the CLI, log in as admin using the credentials given by CRC:
  $ eval $(crc oc-env)
  $ oc login -u kubeadmin -p db9Dr-J2csc-8oP78-9sbmf https://api.crc.testing:6443
Here the username is 'kubeadmin' and the password is 'db9Dr-J2csc-8oP78-9sbmf'; the same credentials are used to log in to the OCP web console.

Installing the Operator
From the left menu bar, access OperatorHub and search for "f5" to see the Certified F5 BIG-IP Controller Operator in the listing as below. Click Install to install this Operator. Installing an Operator is a guided process; the screen below shows the different options available when subscribing to this Operator. Select the highlighted options and click Subscribe.
Approval Strategy:
• Manual: requires administrator approval to install new updates.
• Automatic: when a new release is available, it is applied automatically (default).
When the Operator is subscribed, it is installed according to the approval strategy. The Installed Operator screen is shown below.

Configuring and Deploying an F5 BIG-IP Controller Instance
Click on "F5 BIG-IP Controller" or "F5BigIPCtlr" under the Provided APIs column to create an instance of the F5 BIG-IP Controller. The F5BigIpCtlr instance creation screen is shown below; it provides an editor to configure CIS (the F5 BIG-IP Controller) with the required deployment options. A sample Controller deployment configuration is shown below:

  apiVersion: cis.f5.com/v1
  kind: F5BigIpCtlr
  metadata:
    name: f5-server
    namespace: openshift-operators
  spec:
    args:
      manage_routes: true
      agent: as3
      log_level: DEBUG
      route_vserver_addr: 172.16.1.4
      bigip_partition: ocp
      openshift_sdn_name: /Common/openshift_vxlan
      bigip_url: 172.16.2.23
      log_as3_response: true
      insecure: true
      pool-member-type: cluster
    bigip_login_secret: f5-bigip-ctlr-login
    image:
      pullPolicy: Always
      repo: k8s-bigip-ctlr
      user: f5networks
    namespace: kube-system
    rbac:
      create: true
    resources: {}
    serviceAccount:
      create: true
    version: latest

Create the BIG-IP controller login secret (a sample command is shown just after this section) and reference it in the above configuration. Update the YAML and click Create. Based on the namespace and configuration options, CIS is installed.
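A sketch of creating that secret from the CLI is shown below; the secret name must match the bigip_login_secret value above, the namespace must match where CIS runs (kube-system here), and the username/password values are placeholders for your BIG-IP credentials.

  $ oc create secret generic f5-bigip-ctlr-login -n kube-system \
      --from-literal=username=admin --from-literal=password=<bigip-password>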
When the Operator deploys the controller, we can see the updated YAML of the CustomResource instance. For example:

  Name:         f5-server
  Namespace:    openshift-operators
  Labels:       <none>
  Annotations:  <none>
  API Version:  cis.f5.com/v1
  Kind:         F5BigIpCtlr
  Metadata:
    Creation Timestamp:  2020-02-08T00:31:21Z
    Finalizers:
      uninstall-helm-release
    Generation:        1
    Resource Version:  245330
    Self Link:         /apis/cis.f5.com/v1/namespaces/openshift-operators/f5bigipctlrs/f5-server
    UID:               546d3890-4a0a-11ea-a1cf-0ef0e3c74fbe
  Spec:
    args:
      agent:               as3
      bigip_partition:     ocp
      bigip_url:           172.16.2.23
      insecure:            true
      log_as3_response:    true
      log_level:           DEBUG
      manage_routes:       true
      openshift_sdn_name:  /Common/openshift_vxlan
      pool_member_type:    cluster
      route_vserver_addr:  172.16.1.4
    bigip_login_secret:    f5-bigip-ctlr-login
    Image:
      PullPolicy:  Always
      Repo:        k8s-bigip-ctlr
      Tag:         latest
      User:        f5networks
    Namespace:     kube-system
    Rbac:
      Create:      true
    Resources:
    Service Account:
      Create:  true
      Name:    <nil>
  Status:
    Conditions:
      Last Transition Time:  2020-02-08T00:31:21Z
      Status:                True
      Type:                  Initialized
      Last Transition Time:  2020-02-08T00:31:23Z
      Message:               F5 BIG-IP controller: f5-server
        General Controller Documentation:
        - Kubernetes: http://clouddocs.f5.com/containers/latest/kubernetes/index.html
        - OpenShift: http://clouddocs.f5.com/containers/latest/openshift/index.html
        Using Ingress? There's a helm chart for that:
        - https://github.com/F5Networks/charts/tree/master/src/stable/f5-bigip-ingress
        Using Routes in OpenShift? No helm chart yet, but we do have great documentation:
        - http://clouddocs.f5.com/containers/latest/openshift/kctlr-openshift-routes.html
      Reason:                InstallSuccessful
      Status:                True
      Type:                  Deployed
    Deployed Release:
      Manifest:  . . . . . . . . . .

We can verify from the CLI or the GUI:
  $ oc get pods -n kube-system
  NAME                                       READY   STATUS    RESTARTS   AGE
  f5-server-f5-bigip-ctlr-7c77d6846f-z7bhp   1/1     Running   0          112s

Congratulations! Your F5 BIG-IP Controller is deployed using the F5 BIG-IP Controller Operator.

Additional Resources
• Operator Code: https://github.com/F5Networks/k8s-bigip-ctlr/tree/master/operator
• Operator Image: https://access.redhat.com/containers/#/registry.connect.redhat.com/f5networks/k8s-bigip-ctlr-operator

Known Issues
When a Custom Resource instance is created, the instance listing doesn't show Status [1] in the GUI.
[1] https://github.com/operator-framework/operator-sdk/issues/2491