Series: BIG-IP with OpenShift
3 Ways to use F5 BIG-IP with OpenShift 4
F5 BIG-IP can provide key infrastructure and application services in a Red Hat OpenShift 4 environment. Examples include core load balancing for the OpenShift API and Router, DNS services for the cluster, a supplement or replacement for the OpenShift Router, and security protection for the OpenShift management and application services.

#1. Core Services

OpenShift 4 requires a method to provide high availability to the OpenShift API (port 6443), MachineConfig (22623), and Router services (80/443). BIG-IP Local Traffic Manager (LTM) can provide these trusted services easily (a rough tmsh sketch of this pattern appears at the end of this article). OpenShift also requires several DNS records; BIG-IP can accelerate responses to these as a DNS cache and/or provide Global Server Load Balancing of cluster DNS records.

Additional documentation about OpenShift 4 network requirements (Red Hat):
• Networking requirements for user-provisioned infrastructure

#2. OpenShift Router

Red Hat provides its own OpenShift Router for L7 load balancing, but F5 BIG-IP can also provide these services using Container Ingress Services. Instead of deploying load balancing resources on the same nodes that host OpenShift workloads, F5 BIG-IP provides these services outside of the cluster on either hardware or Virtual Edition platforms. Container Ingress Services can run either as an auxiliary router alongside the included router or as a replacement.

Additional articles related to Container Ingress Services:
• Using F5 BIG-IP Controller for OpenShift

#3. Security

F5 can help filter, authenticate, and validate requests going into or out of an OpenShift cluster. LTM can be used to host sensitive SSL resources outside of the cluster (including on a hardware HSM if necessary) and to filter requests (e.g., disallowing requests to internal resources like the management console). Advanced Web Application Firewall (AWAF) policies can be deployed to stymie bad actors from reaching sensitive applications. Access Policy Manager can provide OpenID Connect services for the OpenShift management console and help provide identity services for applications and microservices running on OpenShift (e.g., converting a BasicAuth request into a JWT for a microservice).

Additional documentation related to attaching a security policy to an OpenShift Route:
• AS3 Override

Where Can I Try This?

The environment that was used to write this article and create the companion video can be found at: https://github.com/f5devcentral/f5-k8s-demo/tree/ocp4/ocp4. For folks that are part of F5, you can access this in our Unified Demo Framework and can schedule labs with customers/partners (search for "OpenShift 4.3 with CIS"). I plan on publishing a version of this demo environment that can run natively in AWS. Check back to this article for any updates. Thanks!
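As a rough illustration of the #1 Core Services pattern above, the BIG-IP listeners could be created with tmsh along the following lines. This is only a sketch: the virtual server address, pool names, and control-plane node IPs are placeholders, not values from this article.

create ltm pool ocp_api_pool members add { 10.0.1.10:6443 10.0.1.11:6443 10.0.1.12:6443 } monitor tcp
create ltm pool ocp_machineconfig_pool members add { 10.0.1.10:22623 10.0.1.11:22623 10.0.1.12:22623 } monitor tcp
create ltm virtual ocp_api_vs destination 192.0.2.10:6443 ip-protocol tcp profiles add { fastL4 } pool ocp_api_pool
create ltm virtual ocp_machineconfig_vs destination 192.0.2.10:22623 ip-protocol tcp profiles add { fastL4 } pool ocp_machineconfig_pool

The Router virtual servers on ports 80 and 443 would follow the same pattern, with pool members pointing at the nodes that serve the Router.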
Deploying NGINX Ingress Controller with OpenShift on AWS Managed Service: ROSA

Introduction

In March 2021, Amazon and Red Hat announced the General Availability of Red Hat OpenShift Service on AWS (ROSA). ROSA is a fully managed OpenShift service, jointly managed and supported by both Red Hat and Amazon Web Services (AWS).

OpenShift offers users several different deployment models. Customers that require a high degree of customization and have the skill sets to manage their environment can build and manage OpenShift Container Platform (OCP) on AWS. Those who want to alleviate the complexity of managing the environment and focus on their applications can consume OpenShift as a service, or Red Hat OpenShift Service on AWS (ROSA).

The benefits of ROSA are two-fold. First, we can enjoy simplified Kubernetes cluster creation using the familiar Red Hat OpenShift console, features, and tooling without the burden of manually scaling and managing the underlying infrastructure. Second, the managed service comes with joint billing, support, and out-of-the-box integration with AWS infrastructure and services.

In this article, I am exploring how to deploy an environment with NGINX Ingress Controller integrated into ROSA.

Deploy Red Hat OpenShift Service on AWS (ROSA)

The ROSA service may be deployed directly from the AWS console. Red Hat has done a great job of providing the instructions on creating a ROSA cluster in the Installation Guide. The guide documents the AWS prerequisites, required AWS service quotas, and configuration of your AWS accounts. We run the following commands to ensure that the prerequisites are met before installing ROSA.

- Verify that my AWS account has the necessary permissions:

❯ rosa verify permissions
I: Validating SCP policies...
I: AWS SCP policies ok

- Verify that my AWS account has the necessary quota to deploy a Red Hat OpenShift Service on AWS cluster:

❯ rosa verify quota --region=us-west-2
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html

Next, I ran the following command to prepare my AWS account for cluster deployment:

❯ rosa init
I: Logged in as 'ericji' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' created successfully!
I: Validating SCP policies for 'osdCcsAdmin'...
I: AWS SCP policies ok
I: Validating cluster creation...
I: Cluster creation valid
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.7.19

Following their instructions to create a ROSA cluster using the rosa CLI, after about 35 minutes our deployment produces a Red Hat OpenShift cluster along with the needed AWS components.

❯ rosa create cluster --cluster-name=eric-rosa
I: Creating cluster 'eric-rosa'
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'eric-rosa' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
I: To determine when your cluster is Ready, run 'rosa describe cluster -c eric-rosa'.
I: To watch your cluster installation logs, run 'rosa logs install -c eric-rosa --watch'.
Name: eric-rosa
…

During the deployment, we may enter the following command to follow the OpenShift installer logs and track the progress of our cluster:

> rosa logs install -c eric-rosa --watch

After the Red Hat OpenShift Service on AWS (ROSA) cluster is created, we must configure identity providers to determine how users log in to access the cluster.

What just happened?

Let's review what just happened. The above installation program automatically set up the following AWS resources for the ROSA environment:

- AWS VPC subnets per Availability Zone (AZ). For single-AZ implementations, two subnets were created (one public, one private). The multi-AZ implementation would make use of three Availability Zones, with a public and a private subnet in each AZ (a total of six subnets).
- OpenShift cluster nodes (EC2 instances): three master nodes created to provide cluster quorum and ensure proper fail-over and resilience of OpenShift, plus at least two infrastructure nodes catering for the built-in OpenShift container registry, the OpenShift router layer, and monitoring.
- Multi-AZ implementations: three master nodes and three infrastructure nodes spread across three AZs. Assuming that application workloads will also be running in all three AZs for resilience, this will deploy three workers. This translates to a minimum of nine EC2 instances running within the customer account.
- A collection of AWS Elastic Load Balancers. Some of these load balancers provide end-user access to the application workloads running on OpenShift via the OpenShift router layer; other AWS Elastic Load Balancers expose endpoints used for cluster administration and management by the SRE teams.

Source: https://aws.amazon.com/blogs/containers/red-hat-openshift-service-on-aws-architecture-and-networking/

Deploy NGINX Ingress Controller

The NGINX Ingress Operator is a supported and certified mechanism for deploying NGINX Ingress Controller in an OpenShift environment, with point-and-click installation and automatic upgrades. It works for both the NGINX Open Source-based and NGINX Plus-based editions of NGINX Ingress Controller. In this tutorial, I'll be deploying the NGINX Plus-based edition. Read "Why You Need an Enterprise-Grade Ingress Controller on OpenShift" for use cases that merit the use of this edition. If you're not sure how these editions are different, read "Wait, Which NGINX Ingress Controller for Kubernetes Am I Using?"

I install the NGINX Ingress Operator from the OpenShift console. There are numerous options you can set when configuring the NGINX Ingress Controller, as listed in our GitHub repo. Here is a manifest example:

apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: my-nginx-ingress-controller
  namespace: openshift-operators
spec:
  ingressClass: nginx
  serviceType: LoadBalancer
  nginxPlus: true
  type: deployment
  image:
    pullPolicy: Always
    repository: ericzji/nginx-plus-ingress
    tag: 1.12.0

To verify the deployment, run the following commands in a terminal. As shown in the output, the manifest I used in the previous step deployed two replicas of the NGINX Ingress Controller and exposed them with a LoadBalancer service.
❯ oc get pods -n openshift-operators
NAME                                                         READY   STATUS    RESTARTS   AGE
my-nginx-ingress-controller-b556f8bb-bsn4k                   1/1     Running   0          14m
nginx-ingress-operator-controller-manager-7844f95d5f-pfczr   2/2     Running   0          3d5h

❯ oc get svc -n openshift-operators
NAME                                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                      AGE
my-nginx-ingress-controller                                  LoadBalancer   172.30.171.237   a2b3679e50d36446d99d105d5a76d17f-1690020410.us-west-2.elb.amazonaws.com   80:30860/TCP,443:30143/TCP   25h
nginx-ingress-operator-controller-manager-metrics-service    ClusterIP      172.30.50.231    <none>

With NGINX Ingress Controller deployed, we'll have an environment that looks like this:

Post-deployment verification

After the ROSA cluster was configured, I deployed an app (Hipster) in OpenShift that is exposed by NGINX Ingress Controller, by creating an Ingress resource (a sketch of such a resource appears at the end of this article). To use a custom hostname, we must manually change the DNS record on the Internet to point to the IP address of the AWS Elastic Load Balancer.

❯ dig +short a2dc51124360841468c684127c4a8c13-808882247.us-west-2.elb.amazonaws.com
34.209.171.103
52.39.87.162
35.164.231.54

I made this DNS change (optionally, use a local host record), and we will see my demo app available on the Internet, like this:

Deleting your environment

To avoid unexpected charges, don't forget to delete your environment if you no longer need it.

❯ rosa delete cluster -c eric-rosa --watch
? Are you sure you want to delete cluster eric-rosa? Yes
I: Cluster 'eric-rosa' will start uninstalling now
W: Logs for cluster 'eric-rosa' are not available
…

Conclusion

To summarize, ROSA allows infrastructure and security teams to accelerate the deployment of Red Hat OpenShift Service on AWS. Integration with NGINX Ingress Controller provides comprehensive L4-L7 security services for the application workloads running on ROSA. As a developer, having your clusters as well as security services maintained by this service gives you the freedom to focus on deploying applications.

You have two options for getting started with NGINX Ingress Controller:
- Download the NGINX Open Source-based version of NGINX Ingress Controller from our GitHub repo.
- If you prefer to bring your own license to AWS, get a free trial directly from F5 NGINX.
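As mentioned in the post-deployment verification, the demo app was exposed by creating an Ingress resource. A minimal sketch of such a resource is shown below; the hostname and the backend Service name and port are assumptions for illustration, not values from this article.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hipster-ingress            # illustrative name
spec:
  ingressClassName: nginx          # matches the ingressClass set in the NginxIngressController manifest above
  rules:
  - host: hipster.example.com      # assumed hostname; point its DNS record at the ELB as described above
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend         # assumed name of the demo app's front-end Service
            port:
              number: 80           # assumed Service port

Once applied with oc apply -f, NGINX Ingress Controller picks up the resource and routes requests for the host to the backend Service.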
Using F5 BIG-IP Controller Operator for OpenShift

Today the A&O PM and PD team announced the availability of the Certified F5 BIG-IP Controller Operator (using Helm charts) on OpenShift 4.x platforms. In this document we discuss how to install, configure, and deploy CIS using the Red Hat Certified F5 BIG-IP Controller Operator on OpenShift 4.x platforms.

Introduction

What is an Operator? A method of packaging, deploying, and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl/oc tooling. You can think of Operators as the runtime that manages this type of application on Kubernetes. Conceptually, an Operator takes human operational knowledge and encodes it into software that is more easily packaged and shared with consumers.

The F5 BIG-IP Controller Operator is a Service Operator which installs the F5 BIG-IP Controller (Container Ingress Services) on OpenShift 4.x platforms.

Prerequisites

- OpenShift 4.x
- BIG-IP (F5 CIS supported versions)

In this document we will use CodeReady Containers to install, configure, and deploy CIS using the F5 BIG-IP Controller Operator. CRC 1.7.0 installs OCP 4.3.1 on your laptop. Get a suitable image from the CRC repo and follow the instructions to install CRC and bring up your single-node OCP 4.3.1 cluster.

Install, Configure and Deploy CIS using Operator

Accessing the OCP 4.3.1 web console

From the CLI, log in as admin using the credentials provided by CRC.

$ eval $(crc oc-env)
$ oc login -u kubeadmin -p db9Dr-J2csc-8oP78-9sbmf https://api.crc.testing:6443

Here, the username is 'kubeadmin' and the password is 'db9Dr-J2csc-8oP78-9sbmf' to log in to the OCP web console.

Installing the Operator

From the left menu bar, access OperatorHub and search for "f5" to see the Certified F5 BIG-IP Controller Operator in the listing as below. Click Install to install this Operator. Installing an Operator is a guided process. The screen below shows the different options for subscribing to this Operator. Select the highlighted options and click Subscribe.

Approval Strategy:
- Manual: requires administrator approval to install new updates.
- Automatic: when a new release is available, it is updated automatically (default).

When the Operator is subscribed, it is installed based on the approval strategy. An Installed Operator screen is shown below.

Configuring and Deploying an F5 BIG-IP Controller Instance

Click on "F5 BIG-IP Controller" or "F5BigIPCtlr" under the Provided APIs column to create an instance of the F5 BIG-IP Controller. The screen for creating an F5BigIpCtlr instance is shown below. The screen provides an editor to configure CIS/F5 BIG-IP Controller with the required deployment options. A sample controller deployment configuration is shown below:

apiVersion: cis.f5.com/v1
kind: F5BigIpCtlr
metadata:
  name: f5-server
  namespace: openshift-operators
spec:
  args:
    manage_routes: true
    agent: as3
    log_level: DEBUG
    route_vserver_addr: 172.16.1.4
    bigip_partition: ocp
    openshift_sdn_name: /Common/openshift_vxlan
    bigip_url: 172.16.2.23
    log_as3_response: true
    insecure: true
    pool-member-type: cluster
  bigip_login_secret: f5-bigip-ctlr-login
  image:
    pullPolicy: Always
    repo: k8s-bigip-ctlr
    user: f5networks
  namespace: kube-system
  rbac:
    create: true
  resources: {}
  serviceAccount:
    create: true
  version: latest

Create the BIG-IP controller login secret and reference it in the above configuration (a sketch of the command to create this secret appears at the end of this article). Update the YAML and click Create. Based on the namespace and configuration options, CIS is installed. When the Operator deploys the controller, we can see the updated YAML of the CustomResource instance. An example is shown below.
Name:         f5-server
Namespace:    openshift-operators
Labels:       <none>
Annotations:  <none>
API Version:  cis.f5.com/v1
Kind:         F5BigIpCtlr
Metadata:
  Creation Timestamp:  2020-02-08T00:31:21Z
  Finalizers:
    uninstall-helm-release
  Generation:        1
  Resource Version:  245330
  Self Link:         /apis/cis.f5.com/v1/namespaces/openshift-operators/f5bigipctlrs/f5-server
  UID:               546d3890-4a0a-11ea-a1cf-0ef0e3c74fbe
spec:
  args:
    agent:               as3
    bigip_partition:     ocp
    bigip_url:           172.16.2.23
    insecure:            true
    log_as3_response:    true
    log_level:           DEBUG
    manage_routes:       true
    openshift_sdn_name:  /Common/openshift_vxlan
    pool_member_type:    cluster
    route_vserver_addr:  172.16.1.4
  bigip_login_secret:    f5-bigip-ctlr-login
  Image:
    PullPolicy:  Always
    Repo:        k8s-bigip-ctlr
    Tag:         latest
    User:        f5networks
  Namespace:     kube-system
  Rbac:
    Create:  true
  Resources:
  Service Account:
    Create:  true
    Name:    <nil>
Status:
  Conditions:
    Last Transition Time:  2020-02-08T00:31:21Z
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2020-02-08T00:31:23Z
    Message:               F5 BIG-IP controller: f5-server
      General Controller Documentation:
      - Kubernetes: http://clouddocs.f5.com/containers/latest/kubernetes/index.html
      - OpenShift: http://clouddocs.f5.com/containers/latest/openshift/index.html
      Using Ingress? There's a helm chart for that:
      - https://github.com/F5Networks/charts/tree/master/src/stable/f5-bigip-ingress
      Using Routes in OpenShift? No helm chart yet, but we do have great documentation:
      - http://clouddocs.f5.com/containers/latest/openshift/kctlr-openshift-routes.html
    Reason:                InstallSuccessful
    Status:                True
    Type:                  Deployed
  Deployed Release:
    Manifest:  . . . . . . . . . .

We can verify from the CLI or the GUI.

$ oc get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
f5-server-f5-bigip-ctlr-7c77d6846f-z7bhp   1/1     Running   0          112s

Congratulations! Your F5 BIG-IP Controller is deployed using the F5 BIG-IP Controller Operator.

Additional Resources

- Operator Code: https://github.com/F5Networks/k8s-bigip-ctlr/tree/master/operator
- Operator Image: https://access.redhat.com/containers/#/registry.connect.redhat.com/f5networks/k8s-bigip-ctlr-operator

Known Issues

When a Custom Resource instance is created, the instance listing doesn't show Status [1] in the GUI.
[1] https://github.com/operator-framework/operator-sdk/issues/2491
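For completeness, the BIG-IP login secret referenced in the configuration above (f5-bigip-ctlr-login) can be created with a command along these lines. The namespace and credentials shown are assumptions and must match your own deployment; the secret name must match the bigip_login_secret value in the F5BigIpCtlr spec.

$ oc create secret generic f5-bigip-ctlr-login -n kube-system --from-literal=username=admin --from-literal=password=<bigip-password>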
BIG-IP deployment options with OpenShift

NOTE: this article has been superseded by these updated articles:
- F5 BIG-IP deployment with OpenShift - platform and networking options
- F5 BIG-IP deployment with OpenShift - publishing application options

NOTE: outdated content next

This article is meant to be an agnostic overview of the possibilities of using BIG-IP with Red Hat OpenShift: either on-premises or in the cloud, in 1-tier or 2-tier arrangements, and possibly alongside NGINX+. This blog is structured as follows:

- Introduction
- BIG-IP platform flexibility: deployment, scalability and multi-tenancy options
- OpenShift networking options
- BIG-IP networking options
  - 1-tier arrangement
  - 2-tier arrangement
- Publishing the applications: BIG-IP CIS Kubernetes resource types
  - Service type LoadBalancer
  - Ingress and Route resources, the extensibility problem
  - Full flexibility & advanced services with AS3 ConfigMaps
  - F5 Custom Resource Definitions (CRDs)
- Installing Container Ingress Services (CIS) for OpenShift & BIG-IP integration
- Conclusion

Introduction

When using BIG-IP with Red Hat OpenShift Kubernetes, a container component named Container Ingress Services (CIS from now on) is used to plug the BIG-IP APIs into the Kubernetes APIs. When a user configuration is applied or a status change occurs in the cluster, CIS automatically updates the configuration in the BIG-IP using the AS3 declarative API. CIS supports IP Address Management (IPAM from now on) by making use of the F5 IPAM Controller (FIC from now on), which is deployed as a container as well. The FIC IPAM controller can have its own address database or be connected to an external provider such as Infoblox. How these components fit together can be seen in the next picture.

A single BIG-IP cluster can manage both VM and container workloads in the same cluster, and separation between these can be set at the administrative level with partitions and at the network level with routing domains if required.

BIG-IP offers a wide range of options to be used with Red Hat OpenShift. Often these have been driven by customer requests. In the next sections we cover these options and the considerations to be taken into account to choose between them. The full documentation can be found in F5 clouddocs. F5 BIG-IP container integrations are Open Source Software (OSS) and can be found in this GitHub repository, where you will find additional technical details. Please comment below if you have any questions about this article.

BIG-IP platform flexibility: deployment, scalability and multi-tenancy options

First of all, it needs to be clarified that whatever deployment option is chosen, it is independent of the BIG-IP being an appliance, a scale-out chassis, or a Virtual Edition: the configuration is always the same. This platform flexibility also opens up different options for scalability, multi-tenancy, hardware accelerators, and HSMs/NetHSMs/SaaS-HSMs to keep the SSL/TLS private keys secure in a FIPS-compliant manner.

The following options apply to a single BIG-IP cluster:

- A single BIG-IP cluster can handle several OpenShift clusters. This requires at least one CIS instance per OpenShift cluster.
- It is also possible for a given CIS instance to manage a selected set of namespaces. These namespaces can be specified with a list or a label selector.
- In the BIG-IP, each CIS instance will typically write in a dedicated partition, isolated from other CIS instances. When using AS3 ConfigMaps, a single CIS can manage several BIG-IP partitions.
As indicated in the picture, a single BIG-IP cluster can scale up horizontally with up to 8 BIG-IP instances; this is referred to as Scale-N in the BIG-IP documentation.

When hard tenant isolation is required, a dedicated BIG-IP cluster or a vCMP guest instance should be used. vCMP technology can be found in the larger appliances and scale-out chassis. vCMP allows running several independent BIG-IP instances as guests, even running different versions of BIG-IP. Each guest can be allocated different amounts of hardware resources. In the next picture, guests are shown in different colored bars using several blades (grey bars).

OpenShift networking options

Kubernetes networking is provided by Container Networking Interface plugins (CNI from now on), and OpenShift supports the following:

- OpenShiftSDN - supported since OpenShift 3.x and still the default CNI. It makes use of VXLAN encapsulation.
- OVNKubernetes - supported since OpenShift 4.4. It makes use of Geneve encapsulation.

Feature-wise, these CNIs can be compared using the next table from the OpenShift documentation.

Besides the above features, performance should also be taken into consideration. The NICs used in the OpenShift cluster should do encapsulation off-loading, reducing the CPU load on the nodes. Increasing the MTU is recommended, especially for encapsulating CNIs; this is suggested in OpenShift's documentation as well, and needs to be set at installation time in the install-config.yaml file (see this link for details).

BIG-IP networking options

The first thing that needs to be decided is how we want the BIG-IP to access the Pods: should the BIG-IP access the Pods directly, or should we use the typical arrangement of 2-tier load balancing with an in-cluster Ingress Controller? Equally important is to decide how we want to handle NetOps/DevOps separation. CI/CD pipelines provide a management layer which allows several teams to approve or block changes before committing. We are going to tackle how to achieve this separation without such an additional management layer.

BIG-IP networking option - 1-tier arrangement

In this arrangement, the BIG-IP is able to reach the Pods without any address translation. By using only one tier of load balancing (see the next picture) the latency is reduced (potentially also improving client session performance). Persistence is handled easily, and the Pods can be directly monitored, providing an accurate view of the application's health.

As can be seen in the picture above, in a 1-tier arrangement the BIG-IP is part of the CNI network. This is supported for both the OpenShiftSDN and OVNKubernetes CNIs. Configuration for BIG-IP with the OpenShiftSDN CNI can be found in clouddocs.f5.com. Currently, when using the OVNKubernetes CNI, the hybrid-networking option has to be used. In this latter case the OpenShift cluster will extend its CNI network towards the BIG-IPs using VXLAN encapsulation instead of the Geneve encapsulation used internally within the OpenShift nodes. BIG-IP configuration steps for OVNKubernetes in hybrid mode can be followed in this repository created by F5 PM Engineer Mark Dittmer until this is published in clouddocs.f5.com.

With a 1-tier configuration there is a fine demarcation line between NetOps (who traditionally manage the BIG-IPs) and the DevOps teams that want to expose their services in the BIG-IPs. The next diagram proposes a solution for this using the IPAM controller.
The roles and responsibilities would be as follows:

- The NetOps team is responsible for setting up the BIG-IP and its basic configuration, up to the network connectivity towards the cluster, including the CNI overlay.
- The NetOps team is also responsible for setting up the IPAM Controller and, with it, the assignment of IP addresses for each DevOps team or project.
- The NetOps team also sets up the CIS instances. Each DevOps team or set of projects has its own CIS instance, which is fed with IP addresses from the IPAM controller.
- Each CIS instance watches the namespaces of its DevOps team or project. These namespaces are owned by the different DevOps teams. The CIS configuration specifies the BIG-IP partition for the DevOps team or project.
- The DevOps team, as expected, deploys its own applications and creates Kubernetes Service definitions for CIS consumption.
- Moreover, the DevOps team also defines how the Services will be published. This means creating Ingress, Route or any other CRD definitions for publishing the services, which are constrained by the NetOps-owned IPAM controller and CIS instances.

BIG-IP networking option - 2-tier arrangement

This is the typical way in which Kubernetes clusters are deployed. When using a 2-tier arrangement, the External Load Balancer doesn't need to be aware of the CNI and points to the NodePort addresses of the Ingress Controller inside the Kubernetes cluster. It is up to the infrastructure how to send the traffic to the Ingress Controllers. A 2-tier arrangement sets a harder demarcation line between the NetOps and DevOps teams. This type of arrangement using BIG-IP can be seen next.

Most External Load Balancers can only perform L4 functions, but BIG-IP can perform both L4 and L7 functions, as we will see in the next sections. Note: the proxy protocol mentioned in the diagram is used to allow persistence based on the client's IP in the Ingress Controller, regardless of whether the traffic is sent encrypted or not.

Publishing the applications: BIG-IP CIS Kubernetes resource types

Service type LoadBalancer

This is a Kubernetes built-in mechanism to expose Ingress Controllers in any External Load Balancer. In other words, this method is meant for 2-tier topologies. This mechanism is very limited in features, and extensibility is done by means of annotations. F5 CIS supports IPAM integration in this resource type. Check this link for all the options possible.

In general, a problem or limitation with Kubernetes annotations (regardless of the resource type) is that annotations are not validated by the Kubernetes API against a schema, therefore allowing the customer to push bad configurations into Kubernetes. The recommended practice is to limit annotations to simple configurations. Declarations with complex annotations will tend to silently fail or not behave as expected. Especially in these cases, CRDs are recommended. These will be described further down.

Ingress and Route resources, the extensibility problem

Kubernetes and OpenShift provide the following resource types for publishing L7 routes for HTTP/HTTPS services:

- Routes: OpenShift exclusive, eventually going to be deprecated.
- Ingress: Kubernetes standard.

Although these are simple to use, they are very limited in functionality, and more often than not the Ingress Controllers require the use of annotations to augment the functionality. The available F5 annotations for Routes can be checked in this link and for Ingress resources in this link.
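To make the comparison concrete, the following is roughly all that a plain Route definition expresses; the name, hostname, Service name, and port are illustrative only. Anything beyond basic host/path routing and TLS termination has to come from annotations or, as discussed next, from CRDs.

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp                      # illustrative name
spec:
  host: myapp.apps.example.com     # illustrative hostname
  to:
    kind: Service
    name: myapp                    # illustrative backend Service
  port:
    targetPort: 8080               # illustrative target port
  tls:
    termination: edge              # TLS terminated at the router/BIG-IP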
As mentioned previously, complex annotations should be avoided. When publishing L7 routes, the limitations of annotations are more evident and CRDs are even more recommended.

Route and Ingress resources can be further augmented by means of the CIS feature named Override AS3 ConfigMap, which allows specifying an AS3 declaration and attaching it to a Route or Ingress definition. This gives access to almost all the features and modules available in BIG-IP, as exhibited in the next picture.

Although Override AS3 ConfigMap eliminates the extensibility limitations of annotations, it shares the problem that these declarations are not validated by the Kubernetes API against the AS3 schema. Instead, the declaration is validated by CIS, but note that ConfigMaps are not capable of reporting the status of the declaration. Thus the ConfigMap declaration status can only be checked in the CIS logs. Override AS3 ConfigMap declarations are meant to be applied to all the services published by the CIS instance. In other words, this mechanism is useful for applying a general policy or shared configuration across several services (e.g., WAF, APM, elaborate monitoring).

Full flexibility and advanced services with AS3 ConfigMap

The AS3 ConfigMap option is similar to Override AS3 ConfigMap, but it doesn't rely on having a pre-existing Ingress or Route resource: the whole BIG-IP configuration is set up in the ConfigMap. Using full AS3 ConfigMaps with the --hubmode CIS option allows defining the services in DevOps-owned namespaces and the VIP and its associated configuration (e.g., TLS settings, IP intelligence, WAF policy, etc.) in a namespace owned by the NetOps team. This provides independence between the two teams.

Override AS3 ConfigMaps tend to be small because they are just used to patch the Ingress and Route resources, in other words, to extend the Ingress- and Route-generated AS3 configuration. On the other hand, using full AS3 ConfigMaps requires creating a large AS3 JSON declaration that Ingress/Route users are not used to. Again, the AS3 definition within the ConfigMap is validated by BIG-IP and not by Kubernetes, which is a limitation because the status of the configuration can only be fully checked in the CIS logs.

F5 Custom Resource Definitions (CRDs)

Above we've seen the Kubernetes built-in resource types and their limitations regarding advanced services and flexibility. We've also seen the Swiss Army knife that AS3 ConfigMaps are, and their limitation of not being Kubernetes schema-validated. Kubernetes allows API augmentation through Custom Resource Definitions (CRDs), which define new resource types for any functionality needed. F5 has created the following CRDs to provide the ease of use of built-in resource types but with greater functionality, without requiring annotations. Each CRD is focused on different use cases:

IngressLink aims to simplify 2-tier deployments when using BIG-IP and NGINX+, by using the IngressLink CRD instead of a Service of type LoadBalancer. At present the IngressLink CRD provides the following features:

- Proxy Protocol support or other customizations by using iRules.
- Automatic health check monitoring of the NGINX+ readiness port in BIG-IP.
- Linking with NGINX+ using either NodePort or Cluster mode, in the latter case bypassing any kube-proxy/iptables indirection.
- More to come...

When using IngressLink, it automatically exposes both port 443 and port 80, sending the requests to the NGINX+ Ingress Controller.
TransportServer is meant to expose non-HTTP traffic; it can be any TCP or UDP traffic on any port, and it offers several controls, again without requiring annotations.

VirtualServer takes an L7-route-oriented approach analogous to Ingress/Route resources, but provides advanced configurations whilst avoiding the use of annotations or Override AS3 ConfigMaps. It can be used in either a 1-tier or a 2-tier arrangement as well; in the latter case the BIG-IP takes the role of External Load Balancer for in-cluster Ingress Controllers while still providing advanced L7 services (a minimal VirtualServer sketch appears at the end of this article).

All these new CRDs support IPAM.

Summary of BIG-IP CIS Kubernetes resource types

So which resource types should be used? The next tables try to summarize their features, strengths, and usability:

- Ease of use
- Network topology and overall suitability
- Comparing CRDs, Ingress/Routes and ConfigMaps

Please note that the features of the different resources are continuously changing; please check the latest docs for more up-to-date information.

Installing Container Ingress Services (CIS) for OpenShift & BIG-IP integration

CIS installation can be performed in different ways:

- Using Kubernetes resources (named "manual" in F5 clouddocs) - this approach is the most low-level one and allows for ultimate customization.
- Using the Helm chart. This provides life-cycle management of the CIS installation in any Kubernetes cluster.
- Using the CIS Operator. Built on top of the Helm chart, it additionally provides OpenShift-integrated management.

In the screenshots below we can see how the OpenShift Operator construct allows for automatic download and updates. We can also see the use of the F5BigIpCtlr resource type to configure the different instances.

At present, the IPAM controller installation is only done using Kubernetes resources.

After these components are created, the VXLAN configuration needs to be created in the BIG-IP; this can be automated using any of the BIG-IP automation tools, mainly Ansible and Terraform.

Conclusion

F5 BIG-IP provides several options for deployment in OpenShift with unmatched functionality, whether used as an External Load Balancer or as an Ingress Controller in a single-tier setup. Three components are used for this integration:

- The F5 Container Ingress Services (CIS), for plugging the Kubernetes API into BIG-IP.
- The F5 CIS Operator for OpenShift, for installing and managing CIS.
- The F5 IPAM controller.

Resource types are the API used to define Services or Ingress Controllers published in the F5 BIG-IP. These are constantly being updated, and it is recommended to check F5 clouddocs for up-to-date information.

We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our GitHub repository.
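As referenced in the CRD section above, a minimal VirtualServer sketch is shown below. The field names follow the CIS custom resource documentation, while the hostname, virtual server address, and Service details are placeholders; with the IPAM controller, the address can be assigned automatically instead of being hard-coded.

apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: myapp-vs                    # illustrative name
  namespace: myapp-namespace        # illustrative namespace watched by CIS
spec:
  host: myapp.example.com           # placeholder hostname
  virtualServerAddress: 192.0.2.20  # placeholder VIP on the BIG-IP
  pools:
  - path: /
    service: myapp                  # placeholder Kubernetes Service name
    servicePort: 80                 # placeholder Service port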
Connect and Secure OpenShift cloud services with F5 Distributed Cloud

Check out the "OpenShift ANYWHERE with F5 Distributed Cloud" how-to video series. In these first two episodes, we showcase how easy it is to use F5 Distributed Cloud to discover services for Azure Red Hat OpenShift (ARO) and Red Hat OpenShift on AWS (ROSA), respectively, and make OpenShift services available in any location, be it public clouds, private clouds, or the edge.
Deploy OpenShift 4.x with BIG-IP CIS in AWS

OpenShift Container Platform (OCP) provides the HAProxy template router as the default plug-in and the ingress point for all external traffic. While this is fine for small-scale deployments, there are significant challenges when looking to scale your OCP deployments beyond single-cluster, single-site deployments. As with any architectural design, we have to consider our desired 'end state' architecture. For example:

- Will your organization deploy applications across clusters as the environment starts to scale?
- How about agile development methodologies and blue/green A/B deployment scenarios: will the default ADC have the intelligence to automatically direct traffic between production and non-production workloads?
- How about failover and site resiliency?

F5 BIG-IP provides these services using Container Ingress Services (CIS), with a simplified architecture, to help your organization scale applications and services across clusters and sites. In addition, F5 BIG-IP offers advanced access and security control for the traffic going into or out of an OpenShift cluster, to ensure consistent policy enforcement and end-to-end compliance in any cloud.

In this article, we're going to walk you through a fairly minimal deployment of OpenShift 4.3 with BIG-IP CIS in Amazon Web Services (AWS). With that in place, you can enable more complex use cases. So let's get started.

Prerequisites

- If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Confirm that the AWS IAM user you are using to create the OpenShift cluster is granted the AdministratorAccess policy.
- Make sure you have access to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

Configuring Route53

To install OpenShift Container Platform, the AWS account you use must have a dedicated public hosted zone in your Route53 service. This zone must be authoritative for the domain. The Route53 service provides cluster DNS resolution and name lookup for external connections to the OCP cluster.

If you registered your domain with Route53, you do not need any further configuration, as a hosted zone was automatically created. If you use a public domain hosted outside Route53, you need to do the following:

- Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation.
- Share the NS records and SOA record with your IT team so they can add the entries in DNS.

Provision OpenShift cluster

Before you install OpenShift Container Platform, download the installation file on a local computer. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. The installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster. Extract the installation program.
For example, on a computer that uses a Linux operating system, run the following command:

tar xvf <installation_program>.tar.gz

From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

Run the installation program:

❯ ./openshift-install create cluster --dir ~/aws-ocp43 --log-level=info
? SSH Public Key /Users/zji/.ssh/id_rsa.pub
? Platform aws
? Region us-west-2
? Base Domain <mybasedomain>
? Cluster Name cluster1
? Pull Secret [? for help] *********************************************************************************
INFO Creating infrastructure resources
INFO Waiting up to 30m0s for the Kubernetes API at https://api.cluster1.mybasedomain:6443...
INFO API v1.16.2+f2384e2 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.cluster1.mybasedomain:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/zji/aws-ocp43/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster1.mybasedomain
INFO Login to the console with user: kubeadmin, password: 00000-00000-00000-00000

What just happened?

Let's review what just happened. The above installation program automatically set up the following AWS resources for the Red Hat OpenShift environment:

- A virtual private cloud (VPC) that spans three Availability Zones, with one private and one public subnet in each Availability Zone.
- An internet gateway to provide internet access to each subnet.
- An OpenShift master ELB.
- An OpenShift node ELB.
- In the private subnets:
  - Three OpenShift master (including etcd) instances in an Auto Scaling group.
  - Three OpenShift node instances in an Auto Scaling group.

Source: https://aws.amazon.com/quickstart/architecture/openshift/

As an AWS account admin, you can list all the resources that OpenShift or its installer has created per cluster:

❯ aws resourcegroupstaggingapi get-resources --tag-filters "Key=kubernetes.io/cluster/cluster2-7j2jr" | jq '.ResourceTagMappingList[].ResourceARN'
"arn:aws:ec2:us-west-2:877162104333:dhcp-options/dopt-0d8651a54eddb2acb"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-0c4b4d66dbf695655"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-077f8efc0cd8d0b01"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-05001638bc043f0cd"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-03abd4c3fb87a7a7d"
...
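If the full listing is too noisy, the same call can be narrowed to a single resource type. This is just a variation on the command above; adjust the cluster tag to your own infrastructure ID.

❯ aws resourcegroupstaggingapi get-resources --tag-filters "Key=kubernetes.io/cluster/cluster2-7j2jr" --resource-type-filters ec2:instance | jq '.ResourceTagMappingList[].ResourceARN'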
Logging in to the cluster

Next, you can install the CLI in order to interact with OpenShift Container Platform from a command-line interface. You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.

Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami
kube:admin

❯ oc get node
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-128-147.us-west-2.compute.internal   Ready    worker   26m   v1.16.2+f2384e2
ip-10-0-141-160.us-west-2.compute.internal   Ready    master   34m   v1.16.2+f2384e2
ip-10-0-149-163.us-west-2.compute.internal   Ready    master   34m   v1.16.2+f2384e2
ip-10-0-152-36.us-west-2.compute.internal    Ready    worker   26m   v1.16.2+f2384e2
ip-10-0-160-247.us-west-2.compute.internal   Ready    master   34m   v1.16.2+f2384e2
ip-10-0-169-120.us-west-2.compute.internal   Ready    worker   25m   v1.16.2+f2384e2

Simplify Load Balancing with BIG-IP

By default, an OpenShift deployment instantiates the built-in HAProxy template router as the default router. For OpenShift in AWS, it also deploys an AWS ELB as the frontend L4 load balancer, resulting in a two-layer load balancer architecture as illustrated below. Some patterns insert yet another layer of scalability across clusters. F5 BIG-IP simplifies the architecture with a single layer of load balancing, where the BIG-IP is exposed directly to the Internet and also performs L7 routing including SSL offloading, thus improving the performance of apps served from the cluster and the scalability of the overall architecture. It also offers additional benefits: Advanced WAF, Access Policy control, intelligent traffic management, and many more application delivery and security offerings from BIG-IP can be added.

Follow these steps to deploy BIG-IP into the existing VPC: https://clouddocs.f5.com/cloud/public/v1/aws_index.html

Next, you can refer to the F5 CIS user guide to deploy and configure CIS for OpenShift. If you deploy BIG-IP CIS in cluster mode, you may implement VXLAN to route the traffic between BIG-IP and the OpenShift cluster. By default, direct access to OpenShift nodes is limited; to support VXLAN traffic from BIG-IP, you need to adjust the OpenShift security group by exposing the additional required ports (an example CLI sketch appears at the end of this article).

You can verify that F5 BIG-IP CIS is successfully installed:

❯ oc get pods -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
k8s-bigip-ctlr-6664d45f57-cjb8g   1/1     Running   0          15d   10.131.0.46   ip-10-0-222-250.us-west-2.compute.internal   <none>           <none>

Summary

Red Hat provides an excellent foundation for building a production-ready OpenShift environment in AWS. BIG-IP CIS can further simplify the architecture and improve performance by converging the two-tier load balancing into a single layer. In addition, BIG-IP can provide advanced application delivery and security features, and we will cover more use cases in the following articles.
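Returning to the security group adjustment mentioned above, opening the VXLAN port toward the BIG-IP could be done with a command along these lines. The security group ID and source CIDR are placeholders, and you should verify the exact ports your CNI and CIS mode require.

# UDP 4789 is the standard VXLAN port; the CIDR should cover the BIG-IP self-IP used for the tunnel
❯ aws ec2 authorize-security-group-ingress --group-id <worker-node-sg-id> --protocol udp --port 4789 --cidr 10.10.0.0/24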
Lightboard Lessons: What is OpenShift?

The OpenShift Container Platform from Red Hat is a platform as a service leveraging Docker and Kubernetes to provide app developers an easy button for application management, deployment, and scaling. In this episode of Lightboard Lessons, Jason Rahm builds on his earlier videos on Docker and Kubernetes to discuss the value-added building blocks that OpenShift brings to the container table.

Resources
- What are Containers? (video)
- What is Kubernetes? (video)
- What is Kubernetes? (DevCentral Basics article)
- Introduction to F5 Container Ingress Services