3 Ways to use F5 BIG-IP with OpenShift 4
F5 BIG-IP can provide key infrastructure and application services in a Red Hat OpenShift 4 environment. Examples include providing core load balancing for the OpenShift API and Router, DNS services for the cluster, a supplement or replacement for the OpenShift Router, and security protection for the OpenShift management and application services.

#1 Core Services

OpenShift 4 requires a method to provide high availability to the OpenShift API (port 6443), MachineConfig (port 22623), and Router (ports 80/443) services. BIG-IP Local Traffic Manager (LTM) can provide these trusted services easily. OpenShift also requires several DNS records, which the BIG-IP can serve with accelerated responses as a DNS cache and/or by providing Global Server Load Balancing of cluster DNS records. A sketch of an LTM configuration for these core services is included at the end of this article.

Additional documentation about OpenShift 4 network requirements (Red Hat):
• Networking requirements for user-provisioned infrastructure

#2 OpenShift Router

Red Hat provides its own OpenShift Router for L7 load balancing, but F5 BIG-IP can also provide these services using Container Ingress Services. Instead of deploying load balancing resources on the same nodes that host OpenShift workloads, F5 BIG-IP provides these services outside of the cluster on either hardware or Virtual Edition platforms. Container Ingress Services can run either as an auxiliary router alongside the included router or as a replacement.

Additional articles related to Container Ingress Services
• Using F5 BIG-IP Controller for OpenShift

#3 Security

F5 can help filter, authenticate, and validate requests going into or out of an OpenShift cluster. LTM can be used to host sensitive SSL resources outside of the cluster (including on a hardware HSM if necessary) as well as to filter requests (e.g. disallow requests to internal resources like the management console). Advanced Web Application Firewall (AWAF) policies can be deployed to stop bad actors from reaching sensitive applications. Access Policy Manager can provide OpenID Connect services for the OpenShift management console and help provide identity services for applications and microservices running on OpenShift (e.g. converting a BasicAuth request into a JWT for a microservice).

Additional documentation related to attaching a security policy to an OpenShift Route
• AS3 Override

Where Can I Try This?

The environment used to write this article and create the companion video can be found at: https://github.com/f5devcentral/f5-k8s-demo/tree/ocp4/ocp4. For folks that are part of F5, you can access this in our Unified Demo Framework and schedule labs with customers/partners (search for "OpenShift 4.3 with CIS"). I plan on publishing a version of this demo environment that can run natively in AWS. Check back to this article for any updates. Thanks!
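As a reference for the core-services use case in #1, here is a minimal sketch of what the LTM configuration might look like in tmsh. The virtual server address (192.0.2.10), the control-plane node addresses (10.0.1.11-13), and the object names are illustrative assumptions rather than values from this article; a real deployment would also need virtual servers for the Router services (ports 80/443) and appropriate health monitors.

# Assumed addresses and names; adjust to your environment.
tmsh create ltm pool ocp_api_pool members add { 10.0.1.11:6443 10.0.1.12:6443 10.0.1.13:6443 } monitor tcp
tmsh create ltm virtual ocp_api_vs destination 192.0.2.10:6443 ip-protocol tcp profiles add { fastL4 } pool ocp_api_pool

tmsh create ltm pool ocp_machineconfig_pool members add { 10.0.1.11:22623 10.0.1.12:22623 10.0.1.13:22623 } monitor tcp
tmsh create ltm virtual ocp_machineconfig_vs destination 192.0.2.10:22623 ip-protocol tcp profiles add { fastL4 } pool ocp_machineconfig_pool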
Deploying NGINX Ingress Controller with OpenShift on AWS Managed Service: ROSA

Introduction

In March 2021, Amazon and Red Hat announced the General Availability of Red Hat OpenShift Service on AWS (ROSA). ROSA is a fully managed OpenShift service, jointly managed and supported by both Red Hat and Amazon Web Services (AWS). OpenShift offers users several different deployment models. Customers that require a high degree of customization and have the skill sets to manage their environment can build and manage OpenShift Container Platform (OCP) on AWS. Those who want to alleviate the complexity of managing the environment and focus on their applications can consume OpenShift as a service, or Red Hat OpenShift Service on AWS (ROSA).

The benefits of ROSA are two-fold. First, we can enjoy more simplified Kubernetes cluster creation using the familiar Red Hat OpenShift console, features, and tooling without the burden of manually scaling and managing the underlying infrastructure. Second, the managed service is made easier with joint billing, support, and out-of-the-box integration with AWS infrastructure and services. In this article, I am exploring how to deploy an environment with NGINX Ingress Controller integrated into ROSA.

Deploy Red Hat OpenShift Service on AWS (ROSA)

The ROSA service may be deployed directly from the AWS console. Red Hat has done a great job of providing the instructions for creating a ROSA cluster in the Installation Guide. The guide documents the AWS prerequisites, required AWS service quotas, and configuration of your AWS accounts. We run the following commands to ensure that the prerequisites are met before installing ROSA.

- Verify that my AWS account has the necessary permissions:

❯ rosa verify permissions
I: Validating SCP policies...
I: AWS SCP policies ok

- Verify that my AWS account has the necessary quota to deploy a Red Hat OpenShift Service on AWS cluster:

❯ rosa verify quota --region=us-west-2
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html

Next, I ran the following command to prepare my AWS account for cluster deployment:

❯ rosa init
I: Logged in as 'ericji' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' created successfully!
I: Validating SCP policies for 'osdCcsAdmin'...
I: AWS SCP policies ok
I: Validating cluster creation...
I: Cluster creation valid
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.7.19

If we follow their instructions to create a ROSA cluster using the rosa CLI, after about 35 minutes the deployment produces a Red Hat OpenShift cluster along with the needed AWS components.

❯ rosa create cluster --cluster-name=eric-rosa
I: Creating cluster 'eric-rosa'
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'eric-rosa' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
I: To determine when your cluster is Ready, run 'rosa describe cluster -c eric-rosa'.
I: To watch your cluster installation logs, run 'rosa logs install -c eric-rosa --watch'.
Name: eric-rosa
…

During the deployment, we may enter the following command to follow the OpenShift installer logs and track the progress of our cluster:

> rosa logs install -c eric-rosa --watch

After the Red Hat OpenShift Service on AWS (ROSA) cluster is created, we must configure identity providers to determine how users log in to access the cluster.

What just happened?

Let's review what just happened. The above installation program automatically set up the following AWS resources for the ROSA environment:

- AWS VPC subnets per Availability Zone (AZ). For single-AZ implementations two subnets were created (one public, one private). A multi-AZ implementation would make use of three Availability Zones, with a public and a private subnet in each AZ (a total of six subnets).
- OpenShift cluster nodes (EC2 instances): three Master nodes were created to cater for cluster quorum and to ensure proper fail-over and resilience of OpenShift, plus at least two infrastructure nodes catering for the built-in OpenShift container registry, the OpenShift router layer, and monitoring.
- Multi-AZ implementations: three Master nodes and three infrastructure nodes spread across three AZs. Assuming that application workloads will also be running in all three AZs for resilience, this will deploy three Workers. This translates to a minimum of nine EC2 instances running within the customer account.
- A collection of AWS Elastic Load Balancers. Some of these load balancers provide end-user access to the application workloads running on OpenShift via the OpenShift router layer; other AWS Elastic Load Balancers expose endpoints used for cluster administration and management by the SRE teams.

Source: https://aws.amazon.com/blogs/containers/red-hat-openshift-service-on-aws-architecture-and-networking/

Deploy NGINX Ingress Controller

The NGINX Ingress Operator is a supported and certified mechanism for deploying NGINX Ingress Controller in an OpenShift environment, with point-and-click installation and automatic upgrades. It works for both the NGINX Open Source-based and NGINX Plus-based editions of NGINX Ingress Controller. In this tutorial, I'll be deploying the NGINX Plus-based edition. Read "Why You Need an Enterprise-Grade Ingress Controller on OpenShift" for use cases that merit the use of this edition. If you're not sure how these editions are different, read "Wait, Which NGINX Ingress Controller for Kubernetes Am I Using?"

I install the NGINX Ingress Operator from the OpenShift console. There are numerous options you can set when configuring the NGINX Ingress Controller, as listed in our GitHub repo. Here is a manifest example:

apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: my-nginx-ingress-controller
  namespace: openshift-operators
spec:
  ingressClass: nginx
  serviceType: LoadBalancer
  nginxPlus: true
  type: deployment
  image:
    pullPolicy: Always
    repository: ericzji/nginx-plus-ingress
    tag: 1.12.0

To verify the deployment, run the following commands in a terminal. As shown in the output, the manifest I used in the previous step deployed the NGINX Ingress Controller and exposed it with a LoadBalancer service.
❯ oc get pods -n openshift-operators
NAME                                                          READY   STATUS    RESTARTS   AGE
my-nginx-ingress-controller-b556f8bb-bsn4k                    1/1     Running   0          14m
nginx-ingress-operator-controller-manager-7844f95d5f-pfczr    2/2     Running   0          3d5h

❯ oc get svc -n openshift-operators
NAME                                                          TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                      AGE
my-nginx-ingress-controller                                   LoadBalancer   172.30.171.237   a2b3679e50d36446d99d105d5a76d17f-1690020410.us-west-2.elb.amazonaws.com   80:30860/TCP,443:30143/TCP   25h
nginx-ingress-operator-controller-manager-metrics-service     ClusterIP      172.30.50.231    <none>

With NGINX Ingress Controller deployed, we'll have an environment that looks like this:

Post-deployment verification

After the ROSA cluster was configured, I deployed an app (Hipster) in OpenShift that is exposed by NGINX Ingress Controller (by creating an Ingress resource). To use a custom hostname, we must manually change the DNS record on the Internet to point to the IP address of the AWS Elastic Load Balancer.

❯ dig +short a2dc51124360841468c684127c4a8c13-808882247.us-west-2.elb.amazonaws.com
34.209.171.103
52.39.87.162
35.164.231.54

I made this DNS change (optionally, use a local host record), and my demo app is now available on the Internet, like this:

Deleting your environment

To avoid unexpected charges, don't forget to delete your environment if you no longer need it.

❯ rosa delete cluster -c eric-rosa --watch
? Are you sure you want to delete cluster eric-rosa? Yes
I: Cluster 'eric-rosa' will start uninstalling now
W: Logs for cluster 'eric-rosa' are not available
…

Conclusion

To summarize, ROSA allows infrastructure and security teams to accelerate the deployment of the Red Hat OpenShift Service on AWS. Integration with NGINX Ingress Controller provides comprehensive L4-L7 security services for the application workloads running on Red Hat OpenShift Service on AWS (ROSA). As a developer, having your clusters as well as security services maintained by this service gives you the freedom to focus on deploying applications.

You have two options for getting started with NGINX Ingress Controller:
- Download the NGINX Open Source-based version of NGINX Ingress Controller from our GitHub repo.
- If you prefer to bring your own license to AWS, get a free trial directly from F5 NGINX.
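For reference, a minimal sketch of an Ingress resource like the one used in the post-deployment verification above might look as follows. The hostname, service name, and port are illustrative assumptions rather than values taken from this article, and the API version to use depends on your cluster version:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hipster-ingress
spec:
  ingressClassName: nginx          # matches the ingressClass set in the NginxIngressController manifest above
  rules:
    - host: demo.example.com       # assumed hostname; point your DNS record at the ELB address
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend     # assumed front-end Service of the demo app
                port:
                  number: 80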
BIG-IP deployment options with Openshift

NOTE: this article has been superseded by these updated articles:
F5 BIG-IP deployment with OpenShift - platform and networking options
F5 BIG-IP deployment with OpenShift - publishing application options
NOTE: outdated content follows

This article is meant to be an agnostic overview of the possibilities for using BIG-IP with Red Hat OpenShift: either on-premises or in the cloud, either in 1-tier or in 2-tier arrangements, possibly alongside NGINX+. This blog is structured as follows:

- Introduction
- BIG-IP platform flexibility: deployment, scalability and multi-tenancy options
- Openshift networking options
- BIG-IP networking options
  - 1-tier arrangement
  - 2-tier arrangement
- Publishing the applications: BIG-IP CIS Kubernetes resource types
  - Service type Load Balancer
  - Ingress and Route resources, the extensibility problem
  - Full flexibility & advanced services with AS3 ConfigMaps
  - F5 Custom Resource Definitions (CRDs)
- Installing Container Ingress Services (CIS) for Openshift & BIG-IP integration
- Conclusion

Introduction

When using BIG-IP with Red Hat OpenShift/Kubernetes, a container component named Container Ingress Services (CIS from now on) is used to plug the BIG-IP APIs into the Kubernetes APIs. When a user configuration is applied or when a status change has occurred in the cluster, CIS automatically updates the configuration in the BIG-IP using the AS3 declarative API. CIS supports IP Address Management (IPAM from now on) by making use of the F5 IPAM Controller (FIC from now on), which is deployed as a container as well. The FIC IPAM controller can have its own address database or be connected to an external provider such as Infoblox. It can be seen how these components fit together in the next picture.

A single BIG-IP cluster can manage both VM and container workloads in the same cluster, and separation between these can be set at the administrative level with partitions and at the network level with routing domains if required.

BIG-IP offers a wide range of options to be used with Red Hat OpenShift. Often these have been driven by customers' requests. In the next sections we cover these options and the considerations to be taken into account to choose between them. The full documentation can be found in F5 clouddocs. F5 BIG-IP container integrations are Open Source Software (OSS) and can be found in this github repository, where you will find additional technical details. Please comment below if you have any questions about this article.

BIG-IP platform flexibility: deployment, scalability and multi-tenancy options

First of all, it needs to be clarified that the deployment options described here are independent of whether the BIG-IP is an appliance, a scale-out chassis, or a Virtual Edition: the configuration is always the same. This platform flexibility also opens the possibility of using different options for scalability, multi-tenancy, hardware accelerators, or HSMs/NetHSMs/SaaS-HSMs to keep the SSL/TLS private keys secure in a FIPS-compliant manner.

The following options apply to a single BIG-IP cluster:

- A single BIG-IP cluster can handle several Openshift clusters. This requires at least one CIS instance per Openshift cluster instance.
- It is also possible that a given CIS instance manages a selected set of namespaces. These namespaces can be specified with a list or a label selector.
- In the BIG-IP, each CIS instance will typically write in a dedicated partition, isolated from other CIS instances. When using AS3 ConfigMaps, a single CIS can manage several BIG-IP partitions.
- As indicated in the picture, a single BIG-IP cluster can scale up horizontally with up to 8 BIG-IP instances; this is referred to as Scale-N in the BIG-IP documentation.
- When hard tenant isolation is required, a dedicated BIG-IP cluster or a vCMP guest instance should be used. vCMP technology can be found in larger appliances and scale-out chassis. vCMP allows running several independent BIG-IP instances as guests, even running different versions of BIG-IP. Each guest can be allocated different amounts of hardware resources. In the next picture, guests are shown in different colored bars using several blades (grey bars).

Openshift networking options

Kubernetes networking is provided by Container Networking Interface plugins (CNI from now on), and Openshift supports the following:

- OpenshiftSDN - supported since Openshift 3.x and still the default CNI. It makes use of VXLAN encapsulation.
- OVNKubernetes - supported since Openshift 4.4. It makes use of Geneve encapsulation.

Feature-wise, these CNIs can be compared using the next table from the Openshift documentation.

Besides the above features, performance should also be taken into consideration. The NICs used in the Openshift cluster should do encapsulation off-loading, reducing the CPU load in the nodes. Increasing the MTU is recommended, especially for encapsulating CNIs; this is suggested in Openshift's documentation as well, and needs to be set at installation time in the install-config.yaml file (see this link for details).

BIG-IP networking options

The first thing that needs to be decided is how we want the BIG-IP to access the PODs: do we want the BIG-IP to access the PODs directly, or do we want to use the typical arrangement of 2-tier load balancing with an in-cluster Ingress Controller? Equally important is to decide how we want to do NetOps/DevOps separation. CI/CD pipelines provide a management layer which allows several teams to approve or block changes before committing. We are going to tackle how to achieve this separation without such an additional management layer.

BIG-IP networking option - 1-tier arrangement

In this arrangement, the BIG-IP is able to reach the PODs without any address translation. By using only one tier of load balancing (see the next picture), latency is reduced (potentially also increasing clients' session performance). Persistence is handled easily, and the PODs can be directly monitored, providing an accurate view of the application's health.

As can be seen in the picture above, in a 1-tier arrangement the BIG-IP is part of the CNI network. This is supported for both OpenshiftSDN and OVNKubernetes CNIs. Configuration for BIG-IP with the OpenshiftSDN CNI can be found in clouddocs.f5.com. Currently, when using the OVNKubernetes CNI, the hybrid-networking option has to be used. In this latter case the Openshift cluster will extend its CNI network towards the BIG-IPs using VXLAN encapsulation instead of the Geneve encapsulation used internally within the Openshift nodes. BIG-IP configuration steps for OVNKubernetes in hybrid mode can be followed in this repository created by F5 PM Engineer Mark Dittmer until this is published in clouddocs.f5.com.

With a 1-tier configuration there is a fine demarcation line between NetOps (who traditionally managed the BIG-IPs) and DevOps who want to expose their services in the BIG-IPs. In the next diagram a solution for this is proposed using the IPAM controller.
The roles and responsibilities would be as follows:

- The NetOps team would be responsible for setting up the BIG-IP along with its basic configuration, up to the network connectivity towards the cluster, including the CNI overlay.
- The NetOps team would also be responsible for setting up the IPAM Controller and, with it, the assignment of the IP addresses for each DevOps team or project.
- The NetOps team would also set up the CIS instances. Each DevOps team or set of projects would have their own CIS instance, which would be fed with IP addresses from the IPAM controller.
- Each CIS instance would be watching each DevOps team's or project's namespaces. These namespaces are owned by the different DevOps teams. The CIS configuration will specify the partition in the BIG-IP for the DevOps team or project.
- The DevOps team, as expected, deploys their own applications and creates Kubernetes Service definitions for CIS consumption.
- Moreover, the DevOps team will also define how the Services will be published. This means creating Ingress, Route, or any other CRD definitions for publishing the services, which are constrained by the NetOps-owned IPAM controller and CIS instances.

BIG-IP networking option - 2-tier arrangement

This is the typical way in which Kubernetes clusters are deployed. When using a 2-tier arrangement, the External Load Balancer doesn't need to have awareness of the CNI and points to the NodePort addresses of the Ingress Controller inside the Kubernetes cluster. It is up to the infrastructure how to send the traffic to the Ingress Controllers. A 2-tier arrangement sets a harder line of demarcation between the NetOps and DevOps teams. This type of arrangement using BIG-IP can be seen next.

Most External Load Balancers can only perform L4 functionalities, but BIG-IP can perform both L4 and L7 functionalities, as we will see in the next sections.

Note: the proxy protocol mentioned in the diagram is used to allow persistence based on the client's IP in the Ingress Controller, regardless of whether the traffic is sent encrypted or not.

Publishing the applications: BIG-IP CIS Kubernetes resource types

Service type Load Balancer

This is a Kubernetes built-in mechanism to expose Ingress Controllers in any External Load Balancer. In other words, this method is meant for 2-tier topologies. This mechanism is very limited in features, and extensibility is done by means of annotations. F5 CIS supports IPAM integration with this resource type. Check this link for all the options possible.

In general, a problem or limitation with Kubernetes annotations (regardless of the resource type) is that annotations are not validated by the Kubernetes API using a schema, therefore allowing the customer to set bad configurations in Kubernetes. The recommended practice is to limit annotations to simple configurations. Declarations with complex annotations will tend to silently fail or not behave as expected. Especially in these cases, CRDs are recommended. These will be described further down.

Ingress and Route resources, the extensibility problem

Kubernetes and Openshift provide the following resource types for publishing L7 routes for HTTP/HTTPS services:

- Routes: Openshift exclusive, eventually going to be deprecated.
- Ingress: Kubernetes standard.

Although these are simple to use, they are very limited in functionality, and more often than not the Ingress Controllers require the use of annotations to augment the functionality. F5's available annotations for Routes can be checked in this link, and for Ingress resources in this link. A minimal sketch of an annotated Ingress is shown below.
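As an illustration of this annotation-based extensibility, an Ingress using CIS annotations might look like the following sketch. The annotation names shown are taken from the CIS documentation, but their availability and exact names vary by CIS version, and the VIP address, hostname, and Service are assumptions for this example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    virtual-server.f5.com/ip: "10.1.1.50"        # VIP to be created on the BIG-IP (assumed address)
    virtual-server.f5.com/balance: "round-robin" # load balancing method for the generated pool
spec:
  rules:
    - host: myapp.example.com                    # assumed hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-svc                  # assumed Service name
                port:
                  number: 8080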
As mentioned previously, complex annotations should be avoided. When publishing L7 routes, annotations' limitations are more evident and CRDs are even more recommended.

Route and Ingress resources can be further augmented by means of the CIS feature named Override AS3 ConfigMap, which allows specifying an AS3 declaration and attaching it to a Route or Ingress definition. This gives access to almost all features and modules available in BIG-IP, as exhibited in the next picture.

Although Override AS3 ConfigMap eliminates the annotations' extensibility limitations, it shares the problem that these are not validated by the Kubernetes API using the AS3 schema. Instead, it is validated by CIS, but note that ConfigMaps are not capable of reporting the status of the declaration. Thus the ConfigMap declaration status can only be checked in the CIS logs. Override AS3 ConfigMap declarations are meant to be applied to all the services published by the CIS instance. In other words, this mechanism is useful for applying a general policy or shared configuration across several services (e.g. WAF, APM, elaborate monitoring).

Full flexibility and advanced services with AS3 ConfigMap

The AS3 ConfigMap option is similar to Override AS3 ConfigMap, but it doesn't rely on having a pre-existing Ingress or Route resource. The whole BIG-IP configuration is set up in the ConfigMap. Using full AS3 ConfigMaps with the --hubmode CIS option allows defining the services in the DevOps-owned namespaces, and the VIP and associated configurations (e.g. TLS settings, IP intelligence, WAF policy, etc.) in a namespace owned by the NetOps team. This provides independence between the two teams.

Override AS3 ConfigMaps tend to be small because these are just used to patch the Ingress and Route resources, in other words, extending the Ingress- and Route-generated AS3 configuration. On the other hand, using full AS3 ConfigMaps requires creating a large AS3 JSON declaration that Ingress/Route users are not used to. Again, the AS3 definition within the ConfigMap is validated by BIG-IP and not by Kubernetes, which is a limitation because the status of the configuration can only be fully checked in the CIS logs.

F5 Custom Resource Definitions (CRDs)

Above we've seen the Kubernetes built-in resource types and their advanced-services and flexibility limitations. We've also seen the swiss-army knife that AS3 ConfigMaps are, and the limitation of not being Kubernetes schema-validated. Kubernetes allows API augmentation by means of Custom Resource Definitions (CRDs), which define new resource types for any functionality needed. F5 has created the following CRDs to provide the ease of use of the built-in resource types but with greater functionality, without requiring annotations. Each CRD is focused on different use cases:

IngressLink aims to simplify 2-tier deployments when using BIG-IP and NGINX+, by using the IngressLink CRD instead of a Service of type LoadBalancer. At present the IngressLink CRD provides the following features:

- Proxy Protocol support or other customizations by using iRules.
- Automatic health check monitoring of the NGINX+ readiness port in BIG-IP.
- It's possible to link with NGINX+ using either NodePort or Cluster mode, in the latter case bypassing any kube-proxy/iptables indirection.
- More to come...

When using IngressLink, it automatically exposes both ports 443 and 80, sending the requests to the NGINX+ Ingress Controller. A minimal sketch of an IngressLink resource follows.
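For illustration, an IngressLink resource might look like the sketch below. The virtual server address, iRule name, and selector label are assumptions for this example, and the exact fields may vary between CIS versions, so treat this as a starting point rather than a definitive manifest:

apiVersion: cis.f5.com/v1
kind: IngressLink
metadata:
  name: nginx-ingresslink
  namespace: nginx-ingress
spec:
  virtualServerAddress: "192.0.2.20"     # assumed VIP published on the BIG-IP
  iRules:
    - /Common/Proxy_Protocol_iRule       # assumed iRule enabling Proxy Protocol towards NGINX+
  selector:
    matchLabels:
      app: ingresslink                   # must match the labels of the NGINX+ Ingress Controller Service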
TransportServer is meant to expose non-HTTP traffic; it can be any TCP or UDP traffic, and it offers several controls, again without requiring the use of annotations.

VirtualServer takes an L7-route-oriented approach analogous to Ingress/Route resources, but provides advanced configurations whilst avoiding the use of annotations or Override AS3 ConfigMaps. This can be used either in a 1-tier or in a 2-tier arrangement as well. In the latter case the BIG-IP would take the function of External Load Balancer of in-cluster Ingress Controllers, yet provide advanced L7 services.

All these new CRDs support IPAM.

Summary of BIG-IP CIS Kubernetes resource types

So which resource type should be used? The next tables try to summarize their features, strengths, and usability:

- Ease of use
- Network topology and overall suitability
- Comparing CRDs, Ingress/Routes and ConfigMaps

Please note that the features of the different resources are continuously changing; please check the latest docs for more up-to-date information.

Installing Container Ingress Services (CIS) for Openshift & BIG-IP integration

CIS installation can be performed in different ways:

- Using Kubernetes resources (named "manual" in F5 clouddocs) - this approach is the most low-level one and allows for ultimate customization.
- Using the Helm chart. This provides life-cycle management of the CIS installation in any Kubernetes cluster.
- Using the CIS Operator. Built on top of the Helm chart, it additionally provides Openshift-integrated management.

In the screenshots below we can see how the Openshift Operator construct allows for automatic download and updates. We can also see the use of the F5BigIpCtlr resource type to configure the different instances.

At present, IPAM controller installation is only done using Kubernetes resources.

After these components are created, the VxLAN configuration needs to be created in the BIG-IP; this can be automated using any of the BIG-IP automation tools, mainly Ansible and Terraform.

Conclusion

F5 BIG-IP provides several options for deployment in Openshift with unmatched functionality, whether used as an External Load Balancer or as an Ingress Controller achieving a single-tier setup. Three components are used for this integration:

- F5 Container Ingress Services (CIS) for plugging the Kubernetes API into the BIG-IP.
- The F5 CIS Openshift Operator for installing and managing CIS.
- The F5 IPAM controller.

Resource types are the API used to define Services or Ingress Controllers to be published in the F5 BIG-IP. These are constantly being updated, and it is recommended to check F5 clouddocs for up-to-date information.

We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our github repository.
Multi-cluster Kubernetes/Openshift with GSLB-TOOL

Overview

This is article 1 of 2. GSLB-TOOL is an OSS project around BIG-IP DNS (GTM) and F5 Cloud Services' DNS LB GSLB products to provide GSLB functionality to Openshift/Kubernetes. GSLB-TOOL is a multi-cluster enabler. Doing multi-cluster with GSLB has the following advantages:

- Cross-cloud. Services are published in a coordinated manner while being hosted in any public cloud or private cloud.
- High degree of control. Publishing is done based on service name instead of IP address. Traffic is directed to a specific data center based on operational decisions such as service load, also allowing canary, blue/green, and A/B deployments across data centers.
- Stickiness. Regardless of topology changes in the network, clients will be consistently directed to the same data center.
- IP Intelligence. Clients can be redirected to the desired data center based on the client's location, and stats can be gathered for analytics.

The use cases covered by GSLB-TOOL are:

- Multi-cluster deployments
- Data center load distribution
- Enhanced customer experience
- Advanced Blue/Green, A/B and Canary deployment options
- Disaster Recovery
- Cluster Migrations
- Kubernetes <-> Openshift migrations
- Container platform version migration. For example, OCP 3.x to 4.x or OCP 4.x to 4.y.

GSLB-TOOL is implemented as a set of Ansible scripts and roles that can be used from the CLI or from a Continuous Delivery tool such as Spinnaker or Argo CD. The tool operates as glue between the Kubernetes/Openshift API and the GSLB API. GSLB-TOOL uses GIT as the source of truth to store its state, hence the GSLB state is not in any specific cluster. The next figure shows a schema of it.

It is important to emphasize that GSLB-TOOL is cross-vendor as well, since it can use any Ingress Controller or Router implementation. In other words, it is not necessary to use BIG-IP or NGINX for this. Moreover, a given cluster can have several Router/Ingress Controller instances from different vendors. This is thanks to only using the Openshift/Kubernetes APIs when inquiring about the container routes deployed.

Usage

To better understand how GSLB-TOOL operates, it is important to remark on the following characteristics:

GSLB-TOOL operates with project/namespace granularity, on a per-cluster basis. When operating with a cluster's project/namespace, it operates with all the L7 routes of that cluster's project/namespace at once. For example, the following command:

$ project-retrieve shop onprem

will retrieve all the L7 routes of the namespace shop from the onprem cluster. Having cluster/namespace granularity simplifies management and mimics the behavior of Red Hat's Cluster Application Migration tool.

In the next figure we can see the overall operations of GSLB-TOOL. At the top we can see in bold the names of the clusters (onprem and aws). In the figure these are only Openshift (aka OCP) clusters, but they could be any other Kubernetes as well. We can also see two sample projects/namespaces (Project A and Project B). Different clusters can have different namespaces as well.

There are two types of commands/actions:

- The project-* commands operate on the Kubernetes/Openshift API and on the source of truth/GIT repository. These commands operate with a project/namespace granularity. GSLB-TOOL doesn't modify your Openshift/K8s cluster, it only performs read-only operations.
- The gslb-* commands operate on the source of truth/GIT repository and with the GSLB API of choice, either BIG-IP or F5 Cloud Services.
These commands operate on all the projects/namespaces of all clusters at once, either submitting or rolling back the changes in the GSLB backends. When GSLB-TOOL pushes the GSLB configuration, it either performs all the changes or doesn't perform any. Thanks to the use of GIT, the gslb-rollback command easily reverts the configuration if desired. Actually, creating the backup of the previous step is only useful when using GSLB-TOOL without GIT, which is possible too.

GSLB-TOOL flexibility

GSLB-TOOL has been designed with flexibility in mind. This is reflected in many of its features:

- It is agnostic of the Router/Ingress Controller implementation.
- In the same GSLB domain, it supports vanilla Kubernetes and Openshift clusters concurrently.
- It is possible to have multiple Routers/Ingress Controllers in the same Kubernetes/Openshift cluster.
- It is possible to define multiple Availability Zones for a given Router/Ingress Controller.
- It can be easily modified, given that it is written in Ansible. Furthermore, the Ansible playbooks make use of template files that can be modified if desired.
- Multiple GSLB backends. At present GSLB-TOOL can use either F5 Cloud Services' DNS LB (a SaaS offering) or F5 BIG-IP DNS (aka GTM) by simply changing the value of the backend configuration option to either f5aas or bigip. All operations, configuration files, etc. remain the same. At present F5 BIG-IP DNS is recommended because it currently offers better monitoring options.
- Ease of PoC. F5 Cloud Services' DNS LB can be used to test the tool and later on switch to F5 BIG-IP DNS by simply changing the backend configuration option.

GSLB-TOOL L7 route inquire/config flexibility

It is especially important to have flexibility when configuring the L7 routes in our source of truth. We might be interested in the following scenarios for a given namespace:

- Homogeneous L7 routes across clusters - Sometimes we expect that all clusters have the same L7 routes for a given namespace. This happens, for example, when all applications are the same in all clusters.
- Heterogeneous L7 routes across clusters - Sometimes we expect that each cluster might have different L7 routes for a given namespace, when there are different versions of the applications (or different applications). This happens, for example, when we are testing new versions of the applications in one cluster and other clusters use the previous version.

To handle these scenarios, we have two strategies when populating the routes:

- project-retrieve – We use the information from the cluster's own Route/Ingress API to populate GSLB.
- project-populate – We use the information from another cluster's Route/Ingress API to populate GSLB. The cluster from which we take the L7 routes is referred to as the reference cluster.

We exemplify these strategies in the following figure, where we use a configuration of two clusters (onprem and aws) and a single project/namespace. The L7 routes (either Ingress or Route resources) in these are different: the cluster onprem has two additional L7 routes (/shop and /checkout). We are going to populate our GSLB backend in three different ways.

In Example 1, we perform the following actions in sequence:

- With project-retrieve web onprem we retrieve from the cluster onprem the L7 routes of the project web, and these are stored in the GIT repository or source of truth.
- Analogously, with project-retrieve web aws we retrieve from the cluster aws the L7 routes (only one in this case), and these are stored in the GIT repository or source of truth.
- We submit this configuration into the GSLB backend with gslb-commit. The GSLB backend expects that the onprem cluster has 3 routes and the aws cluster 1 route. If the services are available, the health check results for both clusters will be Green. Therefore the FQDN will return the IP addresses of both clusters' Routers/Ingress Controllers.

In Example 2, we use the project-populate strategy:

- We perform the same first action as in Example 1.
- With project-populate web onprem aws we indicate that we expect the L7 routes defined in onprem to also be available in the aws cluster, which is not the case. In other words, the onprem cluster is used as the reference cluster for aws.
- After we submit the configuration in GSLB with gslb-commit, the health checks in the onprem cluster will succeed and will fail on aws because /shop and /checkout don't exist there (an HTTP 404 is returned). Therefore the FQDN www.f5bddemos.io will return only the IP address of onprem. This will become green automatically once we update the L7 routes and applications in aws.

In Example 3, we again use the project-populate strategy, but we use aws as the reference cluster:

- Unlike in the previous examples, with project-retrieve web aws we retrieve the routes from the cluster aws.
- With project-populate web aws onprem we do the reverse of step b of Example 2: we use aws as the reference for onprem instead.
- After submission of the config with gslb-commit, given that onprem has the L7 route that aws has, the health checking will succeed.

For the sake of simplicity, the examples above show projects/namespaces with only a single FQDN for all their L7 routes, but for a given namespace it is possible to have an arbitrary number of L7 routes and FQDNs. There is no limitation on this either.

Additional information

If you want to see GSLB-TOOL in practice, please check this video. For more information on this tool, please visit the GSLB-TOOL Home Page and its Wiki Page for additional documentation.
Knowledge sharing: Containers, Kubernetes, Openshift, F5 Container Connector, NGINX Ingress

For anyone interested, there is free training for "F5 Container Connector for Kubernetes" and "F5 OpenShift Container Integration" at LearnF5. For NGINX being installed in Kubernetes there is enough info, but for F5 Container Connector/Container Ingress Services there is not so much:

https://docs.nginx.com/nginx-ingress-controller/f5-ingresslink/
https://www.nginx.com/products/nginx-ingress-controller/
https://community.f5.com/t5/technical-articles/better-together-f5-container-ingress-services-and-nginx-plus/ta-p/280471

F5 DevCentral also has a YouTube channel with useful info:

https://www.youtube.com/c/devcentral

If you don't have good knowledge about containers and Kubernetes, then first check the links below. For Docker containers, on YouTube you will find a lot of good training, for example:

you need to learn Kubernetes RIGHT NOW!! - YouTube
Docker Tutorial for Beginners [FULL COURSE in 3 Hours] - YouTube
Docker overview | Docker Documentation

The same is true for Kubernetes, and they have a free test lab on their site:

Learn Kubernetes Basics | Kubernetes
you need to learn Docker RIGHT NOW!! // Docker Containers 101 - YouTube

Red Hat has some free training, and IBM provides some free labs for Containers, Kubernetes, Openshift, etc.:

Training and Certification (redhat.com)
IBM CloudLabs: Free, Interactive Kubernetes Tutorials | IBM
Red Hat OpenShift Tutorials | IBM
Protecting Critical Apps against EastWest Attack

In the previous article, we explained how NetSecOps and DevSecOps could manage their application security policies to prevent advanced attacks from external organization networks. But in advanced persistent hacking, hackers sometimes exploit application vulnerabilities and use advanced malware delivered via phishing emails to the operators. This is an old technique but still valid and utilized by many APT (Advanced Persistent Threat) hacking groups. And if the advanced hackers obtain a DevOps operator's ID and password using the malware, they can access a Kubernetes or OpenShift cluster through the normal login process and easily bypass advanced WAF (Web Application Firewall) solutions deployed in front of the cluster. Once the attacker gets a user ID and password for the Kubernetes or OpenShift cluster, the attacker can also access each application that is running inside the cluster. Since most people on the SecOps team normally install only very basic security functions inside the Kubernetes or OpenShift cluster, the hacker who logged in to the cluster can attack other applications in the same cluster without any security barrier.

F5 Container Ingress Services is not designed to stop this sort of attack within the cluster. To overcome this challenge, we have another tool, NGINX App Protect. NGINX App Protect delivers Layer 7 visibility and granular control for the applications while enabling advanced application security policies. With an NGINX App Protect deployment, DevSecOps can ensure only legitimate traffic is allowed while all other unwanted traffic is blocked. NGINX App Protect can monitor the traffic traversing namespace boundaries between pods and provide advanced application protection at Layer 7 for East-West traffic.

Solution Overview

This article will cover how NGINX App Protect can protect critical applications in an OpenShift environment against an attack originating within the same cluster. Detecting advanced application attacks inside the cluster is beneficial for the DevSecOps team, but this can increase the complexity of security operations. To provide a certain level of protection for the critical application, the NGINX App Protect instance should be installed as a 'Pod Proxy' or a 'Service Proxy' for the application. This means the customer may need multiple NGINX App Protect instances to have the required level of protection for their applications. On the face of it this might seem like a dramatic increase in the complexity of security-related operations.

Security automation is the recommended solution to overcome the increased complexity of this security operations challenge. In this use case, we use Red Hat Ansible as our security automation tool. With Red Hat Ansible, the user can automate their incident response process with their existing security solutions. This can dramatically reduce the security team's response time from hours to minutes. We use Ansible and Elasticsearch to provide all the required 'security automation' processes in this demo.

With all these combined technologies, the solution provides WAF protection for the critical applications deployed in the OpenShift cluster. Once it detects an application-based attack from the same cluster subnet, it immediately blocks the attack and deletes the compromised pod with a pre-defined security automation playbook. The workflow is organized as shown below:

1. The malware from the phishing email infects the developer's laptop.
2. The attacker steals the ID/password of the developer using the malware.
In this demo, the stolen ID is 'dev_user'.

3. The attacker logs in to the 'Test App' in the 'dev-test01' namespace, owned by 'dev_user'.
4. The attacker starts the network-scanning process on the internal subnet of the OpenShift cluster and finds the 'critical-app' application pod.
5. The attacker starts the web-based attack against 'critical-app'.
6. NGINX App Protect protects 'critical-app'; thus, the attack traffic is blocked immediately.
7. NGINX exports the alert details to the external Elasticsearch.
8. If this specific alert meets a pre-defined condition, Elasticsearch triggers the pre-defined Ansible playbook.
9. The Ansible playbook accesses OpenShift and deletes the compromised 'Test App' pod automatically.

*Since this demo focuses on an attack inside the OpenShift cluster, the demo does not include Step #1 and Step #2 (phishing email).

Understanding the 'Security Automation' process

Security automation is the key part of this demo, because organizations don't want to respond to each WAF alert manually, one by one. Manual incident-response processes are time-consuming and inefficient, especially in a modern-app environment with hundreds of container-based applications. In this demo, Red Hat Ansible and Elasticsearch take care of the security automation. Below is a brief workflow of the security automation in this use case:

- In this use case, the F5 Advanced WAF has been deployed in front of the OpenShift cluster and inserts the X-Forwarded-For header value into each session. Since F5 Advanced WAF inserts the X-Forwarded-For header into packets that come from the outside, a packet that doesn't include the X-Forwarded-For header is likely coming from the internal network.
- NGINX App Protect is installed as a 'pod proxy' with the critical application we want to protect. Because NGINX App Protect runs as a pod proxy, all the traffic must be sent through it to reach the 'critical-app'. If NGINX App Protect detects any malicious activities, it sends the alert details to the external Elasticsearch system.
- When any new alerts come from NGINX App Protect, Elasticsearch analyzes the details of the alerts. If the alert meets the conditions below, Elasticsearch triggers a notification to Logstash:
  - the source IP address of the alert is part of the OpenShift cluster subnet…
  - the WAF alert severity is Critical…
- Once the Logstash system receives the notification from Elasticsearch, it creates the ip.txt file, which includes the source IP address of the attack, and executes the pre-defined Ansible playbook.
- The Ansible playbook reads the ip.txt file and extracts the IP address from the file. Ansible then accesses OpenShift and finds the compromised pod using that source IP address from the ip.txt file. Finally, Ansible deletes the compromised pod and the ip.txt file automatically.

Creating the Ansible Playbook

Red Hat Ansible is an automation tool that enables network and security automation for users with enterprise-ready functions. F5 and Red Hat have a strategic partnership and deliver joint use cases for our customer base. With Ansible integration with F5 solutions, organizations can have single-pane-of-glass management for network and security automation. In this use case, we implement an automated security response process with the Ansible playbook when F5 NGINX App Protect detects malicious activities in the OpenShift cluster. Below is the Ansible playbook that executes the incident response process for the attacker's compromised pod.
ansible_ocp.yaml

---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Login to OCP cluster
      k8s_auth:
        host: https://yourocpdomain:6443
        username: kubeadmin
        password: your_ocp_password
        validate_certs: no
      register: k8s_auth_result

    - name: Extract IP Address
      command: cat /yourpath/ip.txt
      register: badpod_ip

    - name: Extract App Label from OpenShift
      shell: |
        sudo oc get pods -A -o json --field-selector status.podIP={{ badpod_ip.stdout }} | grep "\"app\":" | awk '{print $2}' | sed 's/,//'
      register: app_label

    - name: Delete Malicious Deployments
      shell: |
        sudo oc delete all --selector app={{ app_label.stdout }} -A
      register: delete_pod

    - name: Delete IP and Info File
      command: rm -rf /yourpath/ip.txt

    - name: OCP Service Deletion Completed
      debug:
        msg: "{{ delete_pod.stdout }}"

Configuring Elasticsearch Watcher and Logstash

To trigger the Ansible playbook for the security automation, SOC analysts need to validate the alert from NGINX App Protect first. Based on the details of the alert, the SOC analyst might want to execute a different playbook. For example, if the alert is related to a credential stuffing attack, the SOC analyst may want to block the user's application access. But if the alert is related to a known IP blacklist, the analyst might want to block that IP address in the firewall. To support these requirements, the security team needs a tool that can monitor the security alerts and trigger the required actions based on them. Elasticsearch Watcher is a feature of the commercial version of Elasticsearch that users can use to create actions based on conditions, which are periodically evaluated using queries on the data.

1. Configuring the Watcher in Kibana

* You need an Elastic Platinum license or an Eval license to use this feature in Kibana.
* Go to the Kibana UI.
* Management -> Watcher -> Create -> Create advanced watcher
* Copy and paste the JSON code below.

watcher_ocp.json

{
  "trigger": {
    "schedule": { "interval": "1m" }
  },
  "input": {
    "search": {
      "request": {
        "search_type": "query_then_fetch",
        "indices": [ "nginx-*" ],
        "rest_total_hits_as_int": true,
        "body": {
          "query": {
            "bool": {
              "must": [
                { "match": { "outcome_reason": "SECURITY_WAF_VIOLATION" } },
                { "match": { "x_forwarded_for_header_value": "N/A" } },
                { "range": { "@timestamp": { "gte": "now-1h", "lte": "now" } } }
              ]
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 0 } }
  },
  "actions": {
    "logstash_logging": {
      "webhook": {
        "scheme": "http",
        "host": "localhost",
        "port": 1234,
        "method": "post",
        "path": "/{{watch_id}}",
        "params": {},
        "headers": {},
        "body": "{{ctx.payload.hits.hits.0._source.ip_client}}"
      }
    },
    "logstash_exec": {
      "webhook": {
        "scheme": "http",
        "host": "localhost",
        "port": 9001,
        "method": "post",
        "path": "/{{watch_id}}",
        "params": {},
        "headers": {},
        "body": "{{ctx.payload.hits.hits[0].total}}"
      }
    }
  }
}

2. Configuring the 'logstash.conf' file. Below is the final version of the 'logstash.conf' file.
Please note that you have to start Logstash with 'sudo' privileges.

logstash.conf

input {
  syslog { port => 5003 type => nginx }
  http { port => 1234 type => watcher1 }
  http { port => 9001 type => ansible1 }
}

filter {
  if [type] == "nginx" {
    grok {
      match => { "message" => [
        ",attack_type=\"%{DATA:attack_type}\"", ",blocking_exception_reason=\"%{DATA:blocking_exception_reason}\"",
        ",date_time=\"%{DATA:date_time}\"", ",dest_port=\"%{DATA:dest_port}\"", ",ip_client=\"%{DATA:ip_client}\"",
        ",is_truncated=\"%{DATA:is_truncated}\"", ",method=\"%{DATA:method}\"", ",policy_name=\"%{DATA:policy_name}\"",
        ",protocol=\"%{DATA:protocol}\"", ",request_status=\"%{DATA:request_status}\"", ",response_code=\"%{DATA:response_code}\"",
        ",severity=\"%{DATA:severity}\"", ",sig_cves=\"%{DATA:sig_cves}\"", ",sig_ids=\"%{DATA:sig_ids}\"",
        ",sig_names=\"%{DATA:sig_names}\"", ",sig_set_names=\"%{DATA:sig_set_names}\"", ",src_port=\"%{DATA:src_port}\"",
        ",sub_violations=\"%{DATA:sub_violations}\"", ",support_id=\"%{DATA:support_id}\"", ",unit_hostname=\"%{DATA:unit_hostname}\"",
        ",uri=\"%{DATA:uri}\"", ",violation_rating=\"%{DATA:violation_rating}\"", ",vs_name=\"%{DATA:vs_name}\"",
        ",x_forwarded_for_header_value=\"%{DATA:x_forwarded_for_header_value}\"", ",outcome=\"%{DATA:outcome}\"",
        ",outcome_reason=\"%{DATA:outcome_reason}\"", ",violations=\"%{DATA:violations}\"",
        ",violation_details=\"%{DATA:violation_details}\"", ",request=\"%{DATA:request}\""
      ] }
      break_on_match => false
    }
    mutate {
      split => { "attack_type" => "," }
      split => { "sig_ids" => "," }
      split => { "sig_names" => "," }
      split => { "sig_cves" => "," }
      split => { "sig_set_names" => "," }
      split => { "threat_campaign_names" => "," }
      split => { "violations" => "," }
      split => { "sub_violations" => "," }
      remove_field => [ "date_time", "message" ]
    }
    if [x_forwarded_for_header_value] != "N/A" {
      mutate { add_field => { "source_host" => "%{x_forwarded_for_header_value}"}}
    } else {
      mutate { add_field => { "source_host" => "%{ip_client}"}}
    }
    geoip {
      source => "source_host"
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
  }
}

output {
  if [type] == 'nginx' {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "nginx-%{+YYYY.MM.dd}"
    }
  }
  if [type] == 'watcher1' {
    file {
      path => "/yourpath/ip.txt"
      codec => line { format => "%{message}"}
    }
  }
  if [type] == 'ansible1' {
    exec {
      command => "ansible-playbook /yourpath/ansible_ocp.yaml"
    }
  }
}

Simulate the demo

You should start the Kibana Watcher and Logstash services before proceeding with this step.

Kubeadmin console

Please make sure you're logged in to the OCP cluster using a cluster-admin account, and confirm the 'critical-app' is running correctly.

j.lee$ oc whoami
kube:admin
j.lee$
j.lee$ oc get projects
NAME                                   DISPLAY NAME   STATUS
critical-app                                          Active
default                                               Active
dev-test02                                            Active
kube-node-lease                                       Active
kube-public                                           Active
kube-system                                           Active
openshift                                             Active
openshift-apiserver                                   Active
openshift-apiserver-operator                          Active
openshift-authentication                              Active
openshift-authentication-operator                     Active
openshift-cloud-credential-operator                   Active
j.lee$ oc get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE                                             NOMINATED NODE   READINESS GATES
critical-app-v1-5c6546765f-wjhl9   2/2     Running   1          85m   10.129.2.71   ip-10-0-180-68.ap-southeast-1.compute.internal   <none>           <none>
j.lee$

dev_user console

1. Please make sure you're logged in to the OCP cluster using the 'dev_user' account on the compromised pod, and confirm the 'dev-test-app' is running correctly.
PS C:\Users\ljwca\Documents\ocp> oc whoami
dev_user
PS C:\Users\ljwca\Documents\ocp>
PS C:\Users\ljwca\Documents\ocp> oc get projects
NAME         DISPLAY NAME   STATUS
dev-test02                  Active
PS C:\Users\ljwca\Documents\ocp>
PS C:\Users\ljwca\Documents\ocp> oc get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE                                              NOMINATED NODE   READINESS GATES
dev-test-v1-674f467644-t94dc   1/1     Running   0          6s    10.128.2.38   ip-10-0-155-159.ap-southeast-1.compute.internal   <none>           <none>

2. Log in to the 'dev-test' container using the OCP remote shell command.

PS C:\Users\ljwca\Documents\ocp> oc rsh dev-test-v1-674f467644-t94dc
$
$ uname -a
Linux dev-test-v1-674f467644-t94dc 4.18.0-193.14.3.el8_2.x86_64 #1 SMP Mon Jul 20 15:02:29 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

3. Network scanning

This step takes 1~2 hours to complete all scanning.

$ nmap -sP 10.128.0.0/14
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-29 17:20 UTC
Nmap scan report for ip-10-128-0-1.ap-southeast-1.compute.internal (10.128.0.1)
Host is up (0.0025s latency).
Nmap scan report for ip-10-128-0-2.ap-southeast-1.compute.internal (10.128.0.2)
Host is up (0.0024s latency).
Nmap scan report for 10-128-0-3.metrics.openshift-authentication-operator.svc.cluster.local (10.128.0.3)
Host is up (0.0023s latency).
Nmap scan report for 10-128-0-4.metrics.openshift-kube-scheduler-operator.svc.cluster.local (10.128.0.4)
Host is up (0.0027s latency).
.
.
.

After completion of the scanning, you will be able to find the 'critical-app' on the list.

4. Application scanning of the target

You can find the open service ports on the target using nmap.

$ nmap 10.129.2.71
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-29 17:23 UTC
Nmap scan report for 10-129-2-71.critical-app.critical-app.svc.cluster.local (10.129.2.71)
Host is up (0.0012s latency).
Not shown: 998 closed ports
PORT     STATE SERVICE
80/tcp   open  http
8888/tcp open  sun-answerbook

Nmap done: 1 IP address (1 host up) scanned in 0.12 seconds
$

But you will see a 403 error when you try to access the server using port 80. This happens because the default Apache access control only allows traffic from NGINX App Protect.

$ curl http://10.129.2.71/
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access this resource.</p>
<hr>
<address>Apache/2.4.46 (Debian) Server at 10.129.2.71 Port 80</address>
</body></html>
$

Now, you can see the response through port 8888.

$ curl http://10.129.2.71:8888/
<html>
<head>
<title> Network Operation Utility - NSLOOKUP </title>
</head>
<body>
<font color=blue size=12>NSLOOKUP TOOL</font><br><br>
<h2>Please type the domain name into the below box.</h2>
<h1>
<form action="/index.php" method="POST">
<p>
<label for="target">DNS lookup:</label>
<input type="text" id="target" name="target" value="www.f5.com">
<button type="submit" name="form" value="submit">Lookup</button>
</p>
</form>
</h1>
<font color=red>This site is vulnerable to Web Exploit. Please use this site as a test purpose only.</font>
</body>
</html>
$

5. Performing the command injection attack

$ curl -d "target=www.f5.com|cat /etc/passwd&form=submit" -X POST http://10.129.2.71:8888/index.php
<html><head><title>SRE DevSecOps - East-West Attack Blocking</title></head><body><font color=green size=10>NGINX App Protect Blocking Page</font><br><br>Please consult with your administrator.<br><br>Your support ID is: 878077205548544462<br><br><a href='javascript:history.back();'>[Go Back]</a></body></html>$
$
6. Verify the logs in the Kibana dashboard

You should be able to see the NGINX App Protect alerts in your Elasticsearch (ELK) dashboard.

7. Verify that Ansible terminates the compromised pod

Ansible deletes the compromised pod.

Summary

Today's cyber-based threats are getting more and more sophisticated. Attackers keep attempting to find the weakest link in the company's infrastructure and then move from there to the company's data using that link. In most cases, the weakest link of the organization is the human, and the company stores its critical data in the application. This is why attackers use phishing emails to compromise a user's laptop and leverage it to access the application. While F5 is working very closely with our key alliance partners such as Cisco and FireEye to stop the advanced malware at the first stage, our NGINX App Protect can work as another layer of defence for the application to protect the organization's data. F5, Red Hat, and Elastic have developed this new protection mechanism, which is an automated process. This use case allows the DevSecOps team to easily deploy the advanced security layer in their OpenShift cluster. If you want to learn more about this use case, please visit the F5 Business Development official Github link here.
OpenShift Container Platform (OCP) provides the HAProxy template router as the default plug-in and the ingress point for all external traffic. While this is fine for small-scale deployments, there are some significant challenges when looking to scale your OCP deployments beyond a single cluster, single site. As with any architectural design, we have to consider our desired 'end state' architecture. For example:
Will your organization deploy applications across clusters as the environment starts to scale?
How about agile development methodologies and blue/green A/B deployment scenarios: will the default ADC have the intelligence to automatically direct traffic between production and non-production workloads?
How about failover and site resiliency?
F5 BIG-IP provides these services using Container Ingress Services (CIS), with a simpler architecture, to help your organization scale applications and services across clusters and sites. In addition, F5 BIG-IP offers advanced access and security control for the traffic going into or out of an OpenShift cluster, to ensure consistent policy enforcement and end-to-end compliance in any cloud.
In this article, we're going to walk you through a fairly minimal deployment of OpenShift 4.3 with BIG-IP CIS in Amazon Web Services (AWS). With it in place, you can enable more complex use cases. So let's get started.
Prerequisites
If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
Confirm that the AWS IAM user name you are using to create the OpenShift cluster is granted the AdministratorAccess policy.
Make sure you have access to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Configuring Route53
To install OpenShift Container Platform, the AWS account you use must have a dedicated public hosted zone in your Route53 service. This zone must be authoritative for the domain. The Route53 service provides cluster DNS resolution and name lookup for external connections to the OCP cluster.
If you registered your domain with Route53, you do not need any further configuration, as a hosted zone was automatically created. If you use a public domain hosted outside Route53, you need to do the following:
Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation.
Share the NS and SOA records with your IT team so they can add the entries in DNS.
Provision OpenShift cluster
Before you install OpenShift Container Platform, download the installation file on a local computer.
Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. The installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster.
Extract the installation program.
For example, on a computer that uses a Linux operating system, run the following command:
tar xvf <installation_program>.tar.gz
From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Run the installation program:
❯ ./openshift-install create cluster --dir ~/aws-ocp43 --log-level=info
? SSH Public Key /Users/zji/.ssh/id_rsa.pub
? Platform aws
? Region us-west-2
? Base Domain <mybasedomain>
? Cluster Name cluster1
? Pull Secret [? for help] *********************************************************************************
INFO Creating infrastructure resources
INFO Waiting up to 30m0s for the Kubernetes API at https://api.cluster1.mybasedomain:6443...
INFO API v1.16.2+f2384e2 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.cluster1.mybasedomain:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/zji/aws-ocp43/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster1.mybasedomain
INFO Login to the console with user: kubeadmin, password: 00000-00000-00000-00000
What just happened?
Let's review what just happened: the installation program automatically set up the following AWS resources for the Red Hat OpenShift environment:
A virtual private cloud (VPC) that spans three Availability Zones, with one private and one public subnet in each Availability Zone.
An internet gateway to provide internet access to each subnet.
An OpenShift master ELB.
An OpenShift node ELB.
In the private subnets:
Three OpenShift master (including etcd) instances in an Auto Scaling group.
Three OpenShift node instances in an Auto Scaling group.
Source: https://aws.amazon.com/quickstart/architecture/openshift/
As an AWS account admin, you can list all the resources that OpenShift or its installer has created per cluster:
❯ aws resourcegroupstaggingapi get-resources --tag-filters "Key=kubernetes.io/cluster/cluster2-7j2jr" | jq '.ResourceTagMappingList[].ResourceARN'
"arn:aws:ec2:us-west-2:877162104333:dhcp-options/dopt-0d8651a54eddb2acb"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-0c4b4d66dbf695655"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-077f8efc0cd8d0b01"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-05001638bc043f0cd"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-03abd4c3fb87a7a7d"
...
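As a side note, the interactive prompts shown above can also be captured up front in an install-config.yaml (generated with openshift-install create install-config) so the run is repeatable. Below is a minimal sketch under that assumption; the base domain, cluster name, replica counts, and secret placeholders are illustrative values, not taken from this environment:
apiVersion: v1
baseDomain: mybasedomain            # the public Route53 zone configured earlier
metadata:
  name: cluster1                    # yields api.cluster1.mybasedomain and *.apps.cluster1.mybasedomain
controlPlane:
  name: master
  replicas: 3
compute:
  - name: worker
    replicas: 3
platform:
  aws:
    region: us-west-2
pullSecret: '<contents of the pull secret .txt file>'
sshKey: '<contents of your SSH public key, e.g. ~/.ssh/id_rsa.pub>'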
Logging in to the cluster
Next, install the CLI so you can interact with OpenShift Container Platform from a command line. You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
kube:admin
❯ oc get node
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-128-147.us-west-2.compute.internal   Ready    worker   26m   v1.16.2+f2384e2
ip-10-0-141-160.us-west-2.compute.internal   Ready    master   34m   v1.16.2+f2384e2
ip-10-0-149-163.us-west-2.compute.internal   Ready    master   34m   v1.16.2+f2384e2
ip-10-0-152-36.us-west-2.compute.internal    Ready    worker   26m   v1.16.2+f2384e2
ip-10-0-160-247.us-west-2.compute.internal   Ready    master   34m   v1.16.2+f2384e2
ip-10-0-169-120.us-west-2.compute.internal   Ready    worker   25m   v1.16.2+f2384e2
Simplify Load Balancing with BIG-IP
By default, an OpenShift deployment instantiates the built-in HAProxy template router as the default router. For OpenShift in AWS, it also deploys an AWS ELB as the frontend L4 load balancer, resulting in a two-layer load balancer architecture. Some patterns insert yet another layer of scalability across clusters.
F5 BIG-IP simplifies the architecture with a single layer of load balancing: the BIG-IP is exposed directly to the Internet and also performs L7 routing, including SSL offloading, improving the performance of apps served from the cluster and the scalability of the overall architecture. It also offers additional benefits: you can add Advanced WAF, Access Policy control, intelligent traffic management, and many more application delivery and security services from BIG-IP.
Follow the steps to deploy BIG-IP into an existing VPC: https://clouddocs.f5.com/cloud/public/v1/aws_index.html
Next, refer to the F5 CIS user guide to deploy and configure CIS for OpenShift. If you deploy BIG-IP CIS in cluster mode, you may implement VXLAN to route the traffic between BIG-IP and the OpenShift cluster. By default, direct access to OpenShift nodes is limited; to support VXLAN traffic from BIG-IP, adjust the OpenShift security group to open the additional ports that VXLAN requires (VXLAN uses UDP port 4789).
You can verify that F5 BIG-IP CIS is successfully installed:
❯ oc get pods -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
k8s-bigip-ctlr-6664d45f57-cjb8g   1/1     Running   0          15d   10.131.0.46   ip-10-0-222-250.us-west-2.compute.internal   <none>           <none>
Summary
Red Hat provides an excellent foundation for building a production-ready OpenShift environment in AWS. BIG-IP CIS can further simplify the architecture and improve performance by converging the two-tier load balancing into a single layer. In addition, BIG-IP can provide advanced application delivery and security features; we will cover more use cases in the following articles.
An example of an AS3 Rest API call to create a GSLB configuration on BIG-IP.
Hi everyone,
Below you can find an example of an AS3 REST API call that creates a simple GSLB configuration on BIG-IP devices. The main purpose of this article is to share this configuration with others. Of course, on different sites (GitHub, etc.) you can find different bits of data, but I think this example will be useful because it contains all the necessary information about how to create different GSLB objects at the same time, such as: Data Centers (DCs), Servers, Virtual Servers (VSs), Wide IPs, pools, and more.
{
  "class": "AS3",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.21.0",
    "id": "GSLB_test",
    "Common": {
      "class": "Tenant",
      "Shared": {
        "class": "Application",
        "template": "shared",
        "DC1": { "class": "GSLB_Data_Center" },
        "DC2": { "class": "GSLB_Data_Center" },
        "device01": {
          "class": "GSLB_Server",
          "dataCenter": { "use": "DC1" },
          "virtualServers": [
            {
              "name": "/ocp/Shared/ingress_vs_1_443",
              "address": "A.B.C.D",
              "port": 443,
              "monitors": [ { "bigip": "/Common/custom_icmp_2" } ]
            }
          ],
          "devices": [ { "address": "A.B.C.D" } ]
        },
        "device02": {
          "class": "GSLB_Server",
          "dataCenter": { "use": "DC2" },
          "virtualServers": [
            {
              "name": "/ocp2/Shared/ingress_vs_2_443",
              "address": "A.B.C.D",
              "port": 443,
              "monitors": [ { "bigip": "/Common/custom_icmp_2" } ]
            }
          ],
          "devices": [ { "address": "A.B.C.D" } ]
        },
        "dns_listener": {
          "class": "Service_UDP",
          "virtualPort": 53,
          "virtualAddresses": [ "A.B.C.D" ],
          "profileUDP": { "use": "custom_udp" },
          "profileDNS": { "use": "custom_dns" }
        },
        "custom_dns": {
          "class": "DNS_Profile",
          "remark": "DNS Profile test",
          "parentProfile": { "bigip": "/Common/dns" }
        },
        "custom_udp": {
          "class": "UDP_Profile",
          "datagramLoadBalancing": true
        },
        "testpage_local": {
          "class": "GSLB_Domain",
          "domainName": "testpage.local",
          "resourceRecordType": "A",
          "pools": [ { "use": "testpage_pool" } ]
        },
        "testpage_pool": {
          "class": "GSLB_Pool",
          "resourceRecordType": "A",
          "members": [
            {
              "server": { "use": "/Common/Shared/device01" },
              "virtualServer": "/ocp/Shared/ingress_vs_1_443"
            },
            {
              "server": { "use": "/Common/Shared/device02" },
              "virtualServer": "/ocp2/Shared/ingress_vs_2_443"
            }
          ]
        }
      }
    }
  }
}
P.S. The AS3 schema reference guide was very helpful: https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/refguide/schema-reference.html
Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization
Introduction
Red Hat OpenShift Virtualization is a feature that brings virtual machine (VM) workloads into the Kubernetes platform, allowing them to run alongside containerized applications in a seamless, unified environment. Built on the open-source KubeVirt project, OpenShift Virtualization enables organizations to manage VMs using the same tools and workflows they use for containers.
Why OpenShift Virtualization?
Organizations today face critical needs such as:
Rapid Migration: "I want to migrate ASAP" from traditional virtualization platforms to more modern solutions.
Infrastructure Modernization: Transitioning legacy VM environments to leverage the benefits of hybrid and cloud-native architectures.
Unified Management: Running VMs alongside containerized applications to simplify operations and enhance resource utilization.
OpenShift Virtualization addresses these challenges by consolidating legacy and cloud-native workloads onto a single platform. This consolidation simplifies management, enhances operational efficiency, and facilitates infrastructure modernization without disrupting existing services.
Integrating F5 Distributed Cloud Customer Edge (XC CE) into OpenShift Virtualization further enhances this environment by providing advanced networking and security capabilities. This combination offers several benefits:
Multi-Tenancy: Deploy multiple CE VMs, each dedicated to a specific tenant, enabling isolation and customization for different teams or departments within a secure, multi-tenant environment.
Load Balancing: Efficiently manage and distribute application traffic to optimize performance and resource utilization.
Enhanced Security: Implement advanced threat protection at the edge to strengthen your security posture against emerging threats.
Microservices Management: Seamlessly integrate and manage microservices, enhancing agility and scalability.
This guide provides a step-by-step approach to deploying XC CE within OpenShift Virtualization, detailing the technical considerations and configurations required.
Technical Overview
Deploying XC CE within OpenShift Virtualization involves several key technical steps:
Preparation
Cluster Setup: Ensure an operational OpenShift cluster with OpenShift Virtualization installed.
Access Rights: Confirm administrative permissions to configure compute and network settings.
F5 XC Account: Obtain access to generate node tokens and download the XC CE images.
Resource Optimization:
Enable CPU Manager: Configure the CPU Manager to allocate CPU resources effectively.
Configure Topology Manager: Set the policy to single-numa-node for optimal NUMA performance.
Network Configuration:
Open vSwitch (OVS) Bridges: Set up OVS bridges on worker nodes to handle networking for the virtual machines.
NetworkAttachmentDefinitions (NADs): Use Multus CNI to define how virtual machines attach to multiple networks, supporting both external and internal connectivity.
Image Preparation:
Obtain XC CE Image: Download the XC CE image in qcow2 format suitable for KubeVirt.
Generate Node Token: Create a one-time node token from the F5 Distributed Cloud Console for node registration.
User Data Configuration: Prepare cloud-init user data with the node token and network settings to automate the VM initialization process.
Deployment:
Create DataVolumes: Import the XC CE image into the cluster using the Containerized Data Importer (CDI); a minimal DataVolume sketch follows this overview.
Deploy VirtualMachine Resources: Apply manifests to deploy XC CE instances in OpenShift.
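As a rough illustration of the DataVolume step, the import can look like the manifest below. This is a minimal sketch rather than the manifest from the deployment guide: the name, namespace, image URL, and storage size are assumptions you would replace with your own values.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: xc-ce-disk                 # referenced later by the VirtualMachine's root volume
  namespace: f5-ce
spec:
  source:
    http:
      url: "http://<your-image-server>/xc-ce.qcow2"   # hypothetical location of the downloaded XC CE qcow2 image
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 80Gi              # assumed size; size the disk per F5 XC CE requirements
Once applied, CDI pulls the qcow2 image into a PVC that the XC CE virtual machine can boot from.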
Network Configuration
Setting up the network involves creating Open vSwitch (OVS) bridges and defining NetworkAttachmentDefinitions (NADs) to enable multiple network interfaces for the virtual machines.
Open vSwitch (OVS) Bridges
Create a NodeNetworkConfigurationPolicy to define OVS bridges on all worker nodes:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-vms
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
      - name: ovs-vms
        type: ovs-bridge
        state: up
        bridge:
          allow-extra-patch-ports: true
          options:
            stp: true
          port:
            - name: eno1
    ovn:
      bridge-mappings:
        - localnet: ce2-slo
          bridge: ovs-vms
          state: present
Replace eno1 with the appropriate physical network interface on your nodes. This policy sets up an OVS bridge named ovs-vms connected to the physical interface.
NetworkAttachmentDefinitions (NADs)
Define NADs using Multus CNI to attach networks to the virtual machines.
External Network (ce2-slo): Connects VMs to the physical network with a specific VLAN ID. This setup allows the VMs to communicate with external systems, services, or networks, which is essential for applications that require access to resources outside the cluster or need to expose services to external users.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-slo
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-slo",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "f5-ce/ce2-slo",
      "mtu": 1500,
      "vlanID": 3052,
      "ipam": {}
    }
Internal Network (ce2-sli): Provides an isolated Layer 2 network for internal communication. By setting the topology to "layer2", this network operates as an internal overlay network that is not directly connected to the physical network infrastructure. The mtu is set to 1400 bytes to accommodate any overhead introduced by encapsulation protocols used in the internal network overlay.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-sli
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-sli",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "f5-ce/ce2-sli",
      "mtu": 1400,
      "ipam": {}
    }
VirtualMachine Configuration
Configuring the virtual machine involves preparing the image, creating cloud-init user data, and defining the VirtualMachine resource.
Image Preparation
Obtain XC CE Image: Download the qcow2 image from the F5 Distributed Cloud Console.
Generate Node Token: Acquire a one-time node token for node registration.
Cloud-Init User Data
Create a user-data configuration containing the node token and network settings:
#cloud-config
write_files:
  - path: /etc/vpm/user_data
    content: |
      token: <your-node-token>
      slo_ip: <IP>/<prefix>
      slo_gateway: <Gateway IP>
      slo_dns: <DNS IP>
    owner: root
    permissions: '0644'
Replace the placeholders with your actual network configuration. This file automates the VM's initial setup and registration.
VirtualMachine Resource Definition
Define the VirtualMachine resource, specifying CPU, memory, disks, network interfaces, and cloud-init configuration; a sketch that ties these pieces together follows this list.
Resources: Allocate sufficient CPU and memory.
Disks: Reference the DataVolume containing the XC CE image.
Interfaces: Attach NADs for network connectivity.
Cloud-Init: Embed the user data for automatic configuration.
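Putting the pieces above together, a VirtualMachine manifest can look roughly like the sketch below. This is an illustrative example, not the exact manifest from the guide: the VM name, CPU and memory sizing, and disk/volume names are assumptions, while the network names reference the ce2-slo and ce2-sli NADs and the cloud-init user data from the previous sections.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: xc-ce-node1                      # assumed name
  namespace: f5-ce
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 8                       # assumed sizing; check F5 XC CE minimums
          dedicatedCpuPlacement: true    # pairs with the CPU Manager / Topology Manager settings
        resources:
          requests:
            memory: 32Gi                 # assumed sizing
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: slo                  # external interface on the ce2-slo localnet
              bridge: {}
            - name: sli                  # internal interface on the ce2-sli layer2 overlay
              bridge: {}
      networks:
        - name: slo
          multus:
            networkName: f5-ce/ce2-slo
        - name: sli
          multus:
            networkName: f5-ce/ce2-sli
      volumes:
        - name: rootdisk
          dataVolume:
            name: xc-ce-disk             # DataVolume imported earlier with CDI
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              # The write_files block with the node token and SLO network
              # settings from the previous section goes here.
After applying a manifest along these lines, the CE VM should boot, pick up its cloud-init user data, and register with the F5 Distributed Cloud Console using the one-time node token.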
Conclusion
Deploying F5 Distributed Cloud CE in OpenShift Virtualization enables organizations to leverage advanced networking and security features within their existing Kubernetes infrastructure. This integration facilitates a more secure, efficient, and scalable environment for modern applications.
For detailed deployment instructions and configuration examples, please refer to the attached PDF guide.
Related Articles:
• BIG-IP VE in Red Hat OpenShift Virtualization
• VMware to Red Hat OpenShift Virtualization Migration
• OpenShift Virtualization