CIS
10 Topics

CIS and Kubernetes - Part 1: Install Kubernetes and Calico
Welcome to this series, in which we will see how to:
Install Kubernetes and Calico (Part 1)
Deploy F5 Container Ingress Services (F5 CIS) to tie the application lifecycle to our application services (Part 2)

Here is the setup of our lab environment:
BIG-IP version: 15.0.1
Kubernetes components: Ubuntu 18.04 LTS

We consider that your BIG-IPs are already set up and running:
Licensed and set up as a cluster
The networking setup is already done

Part 1: Install Kubernetes and Calico

Setup our systems before installing Kubernetes

Step 1: Update our systems and install Docker
To run containers in Pods, Kubernetes uses a container runtime. We will use Docker and follow the recommendation provided here.
As root on ALL Kubernetes components (Master and Node):

# Install packages to allow apt to use a repository over HTTPS
apt-get -y update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
# Add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# Add Docker apt repository.
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
# Install Docker CE.
apt-get -y update && apt-get install -y docker-ce=18.06.2~ce~3-0~ubuntu
# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart docker.
systemctl daemon-reload
systemctl restart docker

We may do a quick test to ensure Docker runs as expected:
docker run hello-world

Step 2: Setup the Kubernetes tools (kubeadm, kubelet and kubectl)
To set up Kubernetes, we will leverage the following tools:
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line utility to talk to your cluster.
As root on ALL Kubernetes components (Master and Node):

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF | tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get -y update

We can review which version of Kubernetes is supported with F5 Container Ingress Services here. At the time of this article, the latest supported version is v1.13.4. We'll make sure to install this specific version with our following step:

apt-get install -qy kubelet=1.13.4-00 kubeadm=1.13.4-00 kubectl=1.13.4-00 kubernetes-cni=0.6.0-00
apt-mark hold kubelet kubeadm kubectl

Install Kubernetes

Step 1: Setup Kubernetes with kubeadm
We will follow the steps provided in the documentation here.
As root on the MASTER node (make sure to update the API server address to reflect your master node IP):

kubeadm init --apiserver-advertise-address=10.1.20.20 --pod-network-cidr=192.168.0.0/16

Note: SAVE the kubeadm join command somewhere. It is needed to "assimilate" the node later. In my example, it looks like the following (YOURS WILL BE DIFFERENT):

kubeadm join 10.1.20.20:6443 --token rlbc20.va65z7eauz89mmuv --discovery-token-ca-cert-hash sha256:42eca5bf49c645ff143f972f6bc88a59468a30276f907bf40da3bcf5127c0375
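As a side note (not part of the original procedure): if you lose this output, the join command can be regenerated at any time from the master as root:

# Prints a fresh "kubeadm join ..." command with a new token
kubeadm token create --print-join-command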
Now you should NOT be root anymore; go back to your non-root user. Since I use Ubuntu, I'll use the default "ubuntu" user. Run the following commands (they are also printed at the end of the kubeadm init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 2: Install the networking component of Kubernetes
The last step is to set up the network for our k8s infrastructure. In our kubeadm init command, we used --pod-network-cidr=192.168.0.0/16 so that we can now set up the network leveraging Calico, as documented here:

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

You may monitor the deployment by running the command:
kubectl get pods --all-namespaces

After some time (<1 min), everything should have a "Running" status. Make sure that CoreDNS also started properly. If everything is up and running, we have our master set up properly and can go to the node to set up k8s on it.

Step 3: Add the Node to our Kubernetes Cluster
Now that the master is set up properly, we can assimilate the node. You need to retrieve the "kubeadm join ..." command that you received at the end of the "kubeadm init ..." command.
You must run the following command as ROOT on the Kubernetes NODE (remember that you got a different hash and token; the command below is an example):

kubeadm join 10.1.20.20:6443 --token rlbc20.va65z7eauz89mmuv --discovery-token-ca-cert-hash sha256:42eca5bf49c645ff143f972f6bc88a59468a30276f907bf40da3bcf5127c0375

We can check the status of our node by running the following command on our MASTER (ubuntu user):
kubectl get nodes

Both components should have a "Ready" status. The last step is to set up Calico between our BIG-IPs and our Kubernetes cluster.

Setup Calico
We need to set up Calico on our BIG-IPs and k8s components. We will set up our environment with the following AS number: 64512.

Step 1: BIG-IPs Calico setup
F5 has documented this procedure here. We will use our self IPs on the internal network. Therefore we need to make sure of the following:
The self IP has its port lockdown set to "Allow All"
Or add a TCP custom port to the self IP: TCP port 179 (a tmsh equivalent is sketched at the end of this step)
You need to allow BGP on the default route domain 0 on your BIG-IPs. Connect to the BIG-IP GUI and go into Network > Route Domains. Click on Route Domain "0", allow BGP and click on "Update".
Once this is done, connect via SSH and get into a bash shell on both BIG-IPs. Run the following commands:

#access the IMI Shell
imish
#Switch to enable mode
enable
#Enter configuration mode
config terminal
#Setup route bgp with AS Number 64512
router bgp 64512
#Create BGP Peer group
neighbor calico-k8s peer-group
#assign peer group as BGP neighbors
neighbor calico-k8s remote-as 64512
#we need to add all the peers: the other BIG-IP, our k8s components
neighbor 10.1.20.20 peer-group calico-k8s
neighbor 10.1.20.21 peer-group calico-k8s
#on BIG-IP1, run
neighbor 10.1.20.12 peer-group calico-k8s
#on BIG-IP2, run
neighbor 10.1.20.11 peer-group calico-k8s
#save configuration
write
#exit
end

You can review your setup with the command:
show ip bgp neighbors

Note: your other BIG-IP should be identified with a router ID and have a BGP state of "Active". The k8s nodes won't have a router ID since BGP hasn't been set up on those nodes yet.
Keep your BIG-IP SSH sessions open.
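As mentioned above, the self IP port lockdown can also be adjusted from the CLI instead of the GUI (a sketch; "internal-self" is an assumed self IP name, adapt it to your configuration):

# Allow BGP (TCP 179) on the internal self IP used for the Calico peering
tmsh modify net self internal-self allow-service add { tcp:179 }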
We'll re-use the imish terminal once our k8s components have Calico set up.

Step 2: Kubernetes Calico setup
On the MASTER node (not as root), we need to retrieve the calicoctl binary:

curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.10.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin

We need to set up calicoctl as explained here:
sudo mkdir /etc/calico

Create a file /etc/calico/calicoctl.cfg with your preferred editor (you'll need sudo privileges). This file should contain the following:

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/ubuntu/config"

Note: you may have to change the path specified by the kubeconfig parameter based on the user you use to run kubectl commands.
To make sure that calicoctl is properly set up, run the command:
calicoctl get nodes

You should get a list of your Kubernetes nodes. Now we can work on our Calico/BGP configuration as documented here. On the MASTER node:

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 64512
EOF

Note: because we set nodeToNodeMeshEnabled to true, the k8s nodes will receive the same config.
We may now set up our BIG-IP BGP peers. Replace the peerIP value with the IPs of your BIG-IPs.

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-global-bigip1
spec:
  peerIP: 10.1.20.11
  asNumber: 64512
EOF

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-global-bigip2
spec:
  peerIP: 10.1.20.12
  asNumber: 64512
EOF

Review your setup with the command:
calicoctl get bgpPeer

If you go back to your BIG-IP SSH connections, you may check that your Kubernetes nodes now have a router ID in your BGP configuration:
imish
show ip bgp neighbors
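You can also verify the BGP sessions from the Kubernetes side (a quick check; run it on a node, and depending on how calico-node is deployed it may require sudo):

# Lists the local node's BGP peers and whether each session is Established
sudo calicoctl node status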
Summary
So far we have:
Set up Kubernetes
Set up Calico between our BIG-IPs and our Kubernetes cluster
In the next article, we will set up F5 Container Ingress Services (F5 CIS).

BIG-IP deployment options with Openshift

NOTE: this article has been superseded by these updated articles:
F5 BIG-IP deployment with OpenShift - platform and networking options
F5 BIG-IP deployment with OpenShift - publishing application options
NOTE: outdated content next

This article is meant to be an agnostic overview of the possibilities of how to use BIG-IP with RedHat Openshift: either on-prem or in the cloud, either in 1-tier or in 2-tier arrangements, possibly alongside NGINX+. This blog is structured as follows:
Introduction
BIG-IP platform flexibility: deployment, scalability and multi-tenancy options
Openshift networking options
BIG-IP networking options
1-tier arrangement
2-tier arrangement
Publishing the applications: BIG-IP CIS Kubernetes resource types
Service type Load Balancer
Ingress and Route resources, the extensibility problem
Full flexibility & advanced services with AS3 ConfigMaps
F5 Custom Resource Definitions (CRDs)
Installing Container Ingress Services (CIS) for Openshift & BIG-IP integration
Conclusion

Introduction
When using BIG-IP with RedHat Openshift Kubernetes, a container component named Container Ingress Services (CIS from now on) is used to plug the BIG-IP APIs into the Kubernetes APIs. When a user configuration is applied or when a status change has occurred in the cluster, CIS automatically updates the configuration in the BIG-IP using the AS3 declarative API.
CIS supports IP Address Management (IPAM from now on) by making use of the F5 IPAM Controller (FIC from now on), which is deployed as a container as well. The FIC IPAM controller can have its own address database or be connected to an external provider such as Infoblox. It can be seen how these components fit together in the next picture.
A single BIG-IP cluster can manage both VM and container workloads in the same cluster, and separation between these can be set at the administrative level with partitions and at the network level with routing domains if required.
BIG-IP offers a wide range of options to be used with RedHat Openshift. Often these have been driven by customers' requests. In the next sections we cover these options and the considerations to be taken into account to choose between them. The full documentation can be found in F5 clouddocs. F5 BIG-IP container integrations are Open Source Software (OSS) and can be found in this github repository, where you will find additional technical details. Please comment below if you have any question about this article.

BIG-IP platform flexibility: deployment, scalability and multi-tenancy options
First of all, it is needed to clarify that regardless of the deployment option chosen, this is independent of the BIG-IP being an appliance, a scale-out chassis or a Virtual Edition. The configuration is always the same. This platform flexibility also opens the possibility of using different options for scalability, multi-tenancy, hardware accelerators or HSMs/NetHSMs/SaaS-HSMs to keep the SSL/TLS private keys secure in a FIPS-compliant manner.
The following options apply to a single BIG-IP cluster:
A single BIG-IP cluster can handle several Openshift clusters. This requires at least a CIS instance per Openshift cluster instance.
It is also possible that a given CIS instance manages a selected set of namespaces. These namespaces can be specified with a list or a label selector.
In the BIG-IP, each CIS instance will typically write in a dedicated partition, isolated from other CIS instances. When using AS3 ConfigMaps a single CIS can manage several BIG-IP partitions.
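To make the scoping options above concrete, this is roughly how they map to the controller's startup arguments (a sketch; the partition and namespace values are assumptions, and normally only one of the two namespace options would be used):

args: [
  "--bigip-url=<bigip-mgmt-address>",
  "--bigip-partition=team-a",      # dedicated partition written by this CIS instance
  "--namespace=team-a",            # watch an explicit list of namespaces...
  "--namespace-label=team=a",      # ...or select the namespaces by label instead
  "--agent=as3"
]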
As indicated in the picture, a single BIG-IP cluster can scale up horizontally with up to 8 BIG-IP instances; this is referred to as Scale-N in the BIG-IP documentation.
When hard tenant isolation is required, then a dedicated BIG-IP cluster or a vCMP guest instance should be used. vCMP technology can be found in larger appliances and scale-out chassis. vCMP allows running several independent BIG-IP instances as guests, even allowing different versions of BIG-IP. The guests can get allocated different amounts of hardware resources. In the next picture, guests are shown in different colored bars using several blades (grey bars).

Openshift networking options
Kubernetes' networking is provided by Container Networking Interface plugins (CNI from now on), and Openshift supports the following:
OpenshiftSDN - supported since Openshift 3.x and still the default CNI. It makes use of VXLAN encapsulation.
OVNKubernetes - supported since Openshift 4.4. It makes use of Geneve encapsulation.
Feature-wise, these CNIs can be compared with the following table from the Openshift documentation.
Besides the above features, performance should also be taken into consideration. The NICs used in the Openshift cluster should do encapsulation off-loading, reducing the CPU load in the nodes. Increasing the MTU is recommended especially for encapsulating CNIs; this is suggested in Openshift's documentation as well, and needs to be set at installation time in the install-config.yaml file, see this link for details.
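Before wiring the BIG-IP into the cluster network, it can be useful to confirm which of these CNIs a running cluster actually uses (a quick check for OpenShift 4.x; the exact jsonpath may vary by version):

# Prints OpenShiftSDN or OVNKubernetes for the running cluster
oc get network.config cluster -o jsonpath='{.spec.networkType}'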
BIG-IP networking options
The first thing that needs to be decided is how we want the BIG-IP to access the PODs: do we want the BIG-IP to access the PODs directly, or do we want to use the typical arrangement of 2-tier load balancing with an in-cluster Ingress Controller?
Equally important is to decide how we want to do NetOps/DevOps separation. CI/CD pipelines provide a management layer which allows several teams to approve or block changes before committing. We are going to tackle how to achieve this separation without such an additional management layer.

BIG-IP networking option - 1-tier arrangement
In this arrangement, the BIG-IP is able to reach the PODs without any address translation. By only using one tier of load balancing (see the next picture) the latency is reduced (potentially also increasing clients' session performance). Persistence is handled easily and the PODs can be directly monitored, providing an accurate view of the application's health.
As can be seen in the picture above, in a 1-tier arrangement the BIG-IP is part of the CNI network. This is supported for both the OpenshiftSDN and OVNKubernetes CNIs. Configuration for BIG-IP with the OpenshiftSDN CNI can be found in clouddocs.f5.com. Currently, when using the OVNKubernetes CNI the hybrid-networking option has to be used. In this latter case the Openshift cluster will extend its CNI network towards the BIG-IPs using VXLAN encapsulation instead of the Geneve used internally within the Openshift nodes. BIG-IP configuration steps for OVNKubernetes in hybrid mode can be followed in this repository created by F5 PM Engineer Mark Dittmer until this is published in clouddocs.f5.com.
With a 1-tier configuration there is a fine demarcation line between NetOps (who traditionally managed the BIG-IPs) and DevOps, who want to expose their services in the BIG-IPs. In the next diagram a solution for this is proposed using the IPAM controller.
The roles and responsibilities would be as follows:
The NetOps team would be responsible for setting up the BIG-IP along with its basic configuration, up to the network connectivity towards the cluster, including the CNI overlay.
The NetOps team would also be responsible for setting up the IPAM Controller and, with it, the assignment of the IP addresses for each DevOps team or project.
The NetOps team would also set up the CIS instances. Each DevOps team or set of projects would have their own CIS instance, which would be fed with IP addresses from the IPAM controller.
Each CIS instance would be watching each DevOps team's or project's namespaces. These namespaces are owned by the different DevOps teams. The CIS configuration specifies the partition in the BIG-IP for the DevOps team or project.
The DevOps team, as expected, deploys their own applications and creates Kubernetes Service definitions for CIS consumption.
Moreover, the DevOps team will also define how the Services will be published. This means creating Ingress, Route or any other CRD definition for publishing the services, which are constrained by the NetOps-owned IPAM controller and CIS instances.

BIG-IP networking option - 2-tier arrangement
This is the typical way in which Kubernetes clusters are deployed. When using a 2-tier arrangement, the External Load Balancer doesn't need to have awareness of the CNI and points to the NodePort addresses of the Ingress Controller inside the Kubernetes cluster. It is up to the infrastructure how to send the traffic to the Ingress Controllers. A 2-tier arrangement sets a harder line of demarcation between the NetOps and DevOps teams. This type of arrangement using BIG-IP can be seen next.
Most External Load Balancers can only perform L4 functionalities, but BIG-IP can perform both L4 and L7 functionalities, as we will see in the next sections.
Note: the proxy protocol mentioned in the diagram is used to allow persistence based on the client's IP in the Ingress Controller, regardless of whether the traffic is sent encrypted or not.

Publishing the applications: BIG-IP CIS Kubernetes resource types

Service type Load Balancer
This is a Kubernetes built-in mechanism to expose Ingress Controllers on any External Load Balancer. In other words, this method is meant for 2-tier topologies. This mechanism is very feature-limited and extensibility is done by means of annotations. F5 CIS supports IPAM integration in this resource type. Check this link for all possible options.
In general, a problem or limitation with Kubernetes annotations (regardless of the resource type) is that annotations are not validated by the Kubernetes API using a schema, therefore allowing the customer to push bad configurations into Kubernetes. The recommended practice is to limit annotations to simple configurations. Declarations with complex annotations will tend to silently fail or not behave as expected. Especially in these cases CRDs are recommended. These will be described further down.
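To make the Service type LoadBalancer option concrete, a minimal definition that CIS with the IPAM controller can consume might look like the following (a sketch; the IPAM label, names and ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: team-a
  annotations:
    cis.f5.com/ipamLabel: production   # must match an address range defined in the IPAM controller
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080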
Ingress and Route resources, the extensibility problem
Kubernetes and Openshift provide the following resource types for publishing L7 routes for HTTP/HTTPS services:
Routes: Openshift exclusive, eventually going to be deprecated.
Ingress: Kubernetes standard.
Although these are simple to use, they are very limited in functionality, and more often than not the Ingress Controllers require the use of annotations to augment the functionality. F5 available annotations for Routes can be checked in this link and for Ingress resources in this link.
As mentioned previously, complex annotations should be avoided. When publishing L7 routes, the limitations of annotations are more evident and CRDs are even more recommended.
Route and Ingress resources can be further augmented by means of the CIS feature named Override AS3 ConfigMap, which allows specifying an AS3 declaration and attaching it to a Route or Ingress definition. This gives access to almost all features and modules available in BIG-IP, as exhibited in the next picture.
Although Override AS3 ConfigMap eliminates the extensibility limitations of annotations, it shares the problem that these are not validated by the Kubernetes API using the AS3 schema. Instead, it is validated by CIS, but note that ConfigMaps are not capable of reporting the status of the declaration. Thus the ConfigMap declaration status can only be checked in the CIS logs.
Override AS3 ConfigMap declarations are meant to be applied to all the services published by the CIS instance. In other words, this mechanism is useful to apply a general policy or shared configuration across several services (ie: WAF, APM, elaborated monitoring).

Full flexibility and advanced services with AS3 ConfigMap
The AS3 ConfigMap option is similar to Override AS3 ConfigMap, but it doesn't rely on having a pre-existing Ingress or Route resource. The whole BIG-IP configuration is set up in the ConfigMap. Using full AS3 ConfigMaps with the --hubmode CIS option allows defining the services in the DevOps-owned namespaces, and the VIP and associated configurations (ie: TLS settings, IP intelligence, WAF policy, etc...) in a namespace owned by the NetOps team. This provides independence between the two teams.
Override AS3 ConfigMaps tend to be small because these are just used to patch the Ingress and Route resources, in other words extending the Ingress- and Route-generated AS3 configuration. On the other hand, using full AS3 ConfigMaps requires creating a large AS3 JSON declaration that Ingress/Route users are not used to.
Again, the AS3 definition within the ConfigMap is validated by BIG-IP and not by Kubernetes, which is a limitation because the status of the configuration can only be fully checked in the CIS logs.

F5 Custom Resource Definitions (CRDs)
Above we've seen the Kubernetes built-in resource types and their limitations regarding advanced services and flexibility. We've also seen the swiss-army knife that AS3 ConfigMaps are, and the limitation of them not being Kubernetes schema-validated. Kubernetes allows API augmentation through Custom Resource Definitions (CRDs) that define new resource types for any functionality needed.
F5 has created the following CRDs to provide the ease of use of built-in resource types but with greater functionality, without requiring annotations. Each CRD is focused on different use cases:
IngressLink aims to simplify 2-tier deployments when using BIG-IP and NGINX+, by using the IngressLink CRD instead of a Service of type LoadBalancer. At present the IngressLink CRD provides the following features:
Proxy Protocol support or other customizations by using iRules.
Automatic health check monitoring of the NGINX+ readiness port in BIG-IP.
It's possible to link with NGINX+ using either NodePort or Cluster mode, in the latter case bypassing any kube-proxy/iptables indirection.
More to come...
When using IngressLink, it automatically exposes both ports 443 and 80, sending the requests to the NGINX+ Ingress Controller.
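For illustration, a minimal IngressLink definition might look like the following (a sketch; the address, iRule name and label are assumptions and must match your NGINX+ Ingress Controller deployment):

apiVersion: cis.f5.com/v1
kind: IngressLink
metadata:
  name: nginx-ingresslink
  namespace: nginx-ingress
spec:
  virtualServerAddress: "10.1.10.81"   # VIP that BIG-IP will create
  iRules:
    - /Common/Proxy_Protocol_iRule     # optional customization, e.g. to enable proxy protocol
  selector:
    matchLabels:
      app: ingresslink                 # selects the NGINX+ Ingress Controller service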
TransportServer is meant to expose non-HTTP traffic configuration; it can be any TCP or UDP traffic, on any port, and it offers several controls, again without requiring annotations.
VirtualServer takes an L7-route-oriented approach analogous to Ingress/Route resources but provides advanced configurations whilst avoiding the use of annotations or Override AS3 ConfigMaps. This can be used either in a 1-tier or a 2-tier arrangement as well. In the latter case the BIG-IP would take the function of External Load Balancer of in-cluster Ingress Controllers, yet providing advanced L7 services.
All these new CRDs support IPAM.

Summary of BIG-IP CIS Kubernetes resource types
So which resource types should be used? The next tables try to summarize their features, strengths and usability:
Ease of use
Network topology and overall suitability
Comparing CRDs, Ingress/Routes and ConfigMaps
Please note that the features of the different resources are continuously changing; please check the latest docs for more up-to-date information.

Installing Container Ingress Services (CIS) for Openshift & BIG-IP integration
CIS installation can be performed in different ways:
Using Kubernetes resources (named manual in F5 clouddocs) - this approach is the most low-level one and allows for ultimate customization.
Using the Helm chart. This provides life-cycle management of the CIS installation in any Kubernetes cluster.
Using the CIS Operator. Built on top of the Helm chart, it additionally provides Openshift-integrated management.
In the screenshots below we can see how the Openshift Operator construct allows for automatic download and updates. We can also see the use of the F5BigIpCtlr resource type to configure the different instances.
At present, the IPAM controller installation is only done using Kubernetes resources.
After these components are created, the VXLAN configuration needs to be created in the BIG-IP; this can be automated using any of the BIG-IP automations, mainly Ansible and Terraform.

Conclusion
F5 BIG-IP provides several options for deployment in Openshift with unmatched functionality, either used as an External Load Balancer or as an Ingress Controller achieving a single-tier setup. Three components are used for this integration:
The F5 Container Ingress Services (CIS) for plugging the Kubernetes API into BIG-IP.
The F5 CIS Openshift Operator for installing and managing CIS.
The F5 IPAM controller.
Resource types are the API used to define Services or Ingress Controllers published in the F5 BIG-IP. These are constantly being updated and it is recommended to check F5 clouddocs for up-to-date information.
We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our github repository.

F5 Kubernetes BIG-IP Controller or CIS not connecting to Azure Big-IP deployment
I have started a POC for the BIG-IP Azure deployments, which deployed successfully, and I have accessed and set the password. I've deployed the helm chart for CIS, but the pod fails to start. I've tested connectivity to the Azure BIG-IP deployment from a separate pod in the same namespace and it authenticates and returns correct info. I've validated the Azure BIG-IP creds are properly formatted in a secret and that secret is getting mounted in the CIS pod.
Here is the pod log with logging level set to debug:

2021/10/04 21:21:39 [DEBUG] No url in credentials directory, falling back to CLI argument
2021/10/04 21:21:39 [INFO] [INIT] Starting: Container Ingress Services - Version: 2.5.0, BuildInfo: azure-465-1952a80a2165b7fc2d3561795ad09d1eb8615136
2021/10/04 21:21:39 [INFO]TeemServer:product.apis.f5.com
2021/10/04 21:21:39 teemClient:{{CIS-Ecosystem CIS/v2.5.0 df103609-7748-43e4-95a4-6631030e67d0} mmhJU2sCd63BznXAXDh4kxLIyfIMm3Ar product.apis.f5.com}
2021/10/04 21:21:39 [DEBUG] digitalAssetId:950e75d5-7fe0-88bc-eb3c-d654ebb4de47
2021/10/04 21:21:39 [DEBUG] telemetryDatalist:[{"Agent":"as3","ConfigmapsCount":0,"DateOfCISDeploy":"2021-10-04T21:21:39.452535893Z","ExternalDNSCount":0,"IPAMSvcLBCount":0,"IPAMTransportServerCount":0,"IPAMVirtualServerCount":0,"IngressCount":0,"IngressLinkCount":0,"Mode":"cluster","PlatformInfo":"CIS/v2.5.0 K8S/v1.19.11","RoutesCount":0,"RunningInDocker":false,"SDNType":"calico","TransportServerCount":0,"VirtualServerCount":0}]
2021/10/04 21:21:39 [DEBUG] ControllerAsDocker:#{docker}
2021/10/04 21:21:40 Resp Code:204 Status:204 No Content
2021/10/04 21:21:40 [INFO] ConfigWriter started: 0xc000284570
2021/10/04 21:21:40 [DEBUG] [CCCL] ConfigWriter (0xc000284570) writing section name global
2021/10/04 21:21:40 [DEBUG] [CCCL] ConfigWriter (0xc000284570) successfully wrote section (global)
2021/10/04 21:21:40 [DEBUG] [CCCL] ConfigWriter (0xc000284570) writing section name bigip
2021/10/04 21:21:40 [DEBUG] [CCCL] ConfigWriter (0xc000284570) successfully wrote section (bigip)
2021/10/04 21:21:40 [INFO] Started config driver sub-process at pid: 21
2021/10/04 21:21:40 [DEBUG] [INIT] Invalid trusted-certs-cfgmap option provided.
2021/10/04 21:21:40 [INFO] [INIT] Creating Agent for as3
2021/10/04 21:21:40 [DEBUG] [CORE] Agent Response Worker started and blocked on channel 0xc0004e04e0
2021/10/04 21:21:40 [INFO] [AS3] Initializing AS3 Agent
2021/10/04 21:21:41 [DEBUG] [AS3] No certs appended, using only system certs
2021/10/04 21:21:41 [DEBUG] [AS3] Validating AS3 schema with as3-schema-3.28.0-3-cis.json
2021/10/04 21:21:41 [DEBUG] [AS3] posting GET BIGIP AS3 Version request on https://10.2.0.7:8443/mgmt/shared/appsvcs/info
2021/10/04 21:21:43 [ERROR] [AS3] Response body unmarshal failed: invalid character '<' looking for beginning of value
2021/10/04 21:21:43 [ERROR] [AS3] Internal Error
2021/10/04 21:21:43 [CRITICAL] [INIT] Failed to initialize as3 agent, Internal Error
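For context, the "Response body unmarshal failed: invalid character '<'" error above indicates that the controller received something other than JSON when querying the AS3 info endpoint shown in the log. A quick way to check whether the AS3 extension is actually installed on that BIG-IP is to query the same endpoint directly (a sketch; the address, port and credentials are placeholders to adapt):

# Should return AS3 version information as JSON; an HTML error page here
# typically means the AS3 RPM is not installed on the BIG-IP.
curl -sk -u admin:<password> https://<bigip-mgmt-address>:8443/mgmt/shared/appsvcs/info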
In our previous article, we have setup Kubernetes and calico with our BIG-IPs. Now we will setup F5 Container Ingress Services (F5 CIS) and deploy an ingress service. Note: in this deployment, we will setup F5 CIS in the namespace kube-system BIG-IPs Setup Setup our Kubernetes partition Before deploying F5 Container Ingress Services, we need to setup our BIG-IPs: Setup a partition that CIS will use. Install AS3 On EACH BIG-IP, you need to do the following: In the GUI, go to System > User > Partitions List and click on the Create button Create a partition called "kubernetes" and click Finished Next we need to download the AS3 extension and install it on each BIG-IP. Install AS3 Go to GitHub . Select the release with the "latest" tag and download the rpm. Once you have the rpm, go to iApps > Package Management LX and click the Import button (you need to run BIG-IP v12.1 or later) on EACH BIG-IP Chose your rpm and click Upload. Once the rpm has been uploaded and loaded, you should see this: Our BIG-IPs are setup properly now. Next, we will deploy F5 CIS CIS Deployment Connect to your Kubernetes cluster. We will need to setup the following: Create a Kubernetes secret to store our BIG-IPs credentials Setup a service account for CIS and setup RBAC Setup our CIS configuration. We will need 1xCIS per BIG-IP (even when BIG-IP is setup as a cluster - we don't recommend automatic sync between BIG-IPs) Deploy one CIS per BIG-IP Store our BIG-IP credentials in a kubernetes secret To store your credentials (login/password) in a kubernetes secret, you can run the following command (https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-secrets.html#secret-bigip-login): kubectl create secret generic bigip-login --namespace kube-system --from-literal=username=<your_login> --from-literal=password=<your_password> In my setup, my credentials are the following: Login: admin Password: D3f4ult123 So we'll run the following command: kubectl create secret generic bigip-login --namespace kube-system --from-literal=username=admin --from-literal=password=D3f4ult123 ubuntu@ip-10-1-1-4:~$ kubectl create secret generic bigip-login --namespace kube-system --from-literal=username=admin --from-literal=password=D3f4ult123 secret/bigip-login created Create a Service Account /RBAC for CIS Now we need to create our service account (https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html#set-up-rbac-authentication): Run the following command: : kubectl create serviceaccount bigip-ctlr -n kube-system ubuntu@ip-10-1-1-4:~$ kubectl create serviceaccount bigip-ctlr -n kube-system serviceaccount/bigip-ctlr created Use your favorite editor to create this file: f5-k8s-sample-rbac.yaml: # for use in k8s clusters only # for OpenShift, use the OpenShift-specific examples kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: bigip-ctlr-clusterrole rules: - apiGroups: ["", "extensions"] resources: ["nodes", "services", "endpoints", "namespaces", "ingresses", "pods"] verbs: ["get", "list", "watch"] - apiGroups: ["", "extensions"] resources: ["configmaps", "events", "ingresses/status"] verbs: ["get", "list", "watch", "update", "create", "patch"] - apiGroups: ["", "extensions"] resources: ["secrets"] resourceNames: ["<secret-containing-bigip-login>"] verbs: ["get", "list", "watch"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: bigip-ctlr-clusterrole-binding namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
Deploy this config by running the following command:

kubectl apply -f f5-k8s-sample-rbac.yaml

ubuntu@ip-10-1-1-4:~$ kubectl apply -f f5-k8s-sample-rbac.yaml
clusterrole.rbac.authorization.k8s.io/bigip-ctlr-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/bigip-ctlr-clusterrole-binding created

The next step is to set up our CIS deployment files.

Create our CIS deployment configurations
Use your favorite editor to create the following files:

setup_cis_bigip1.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip1-ctlr-deployment
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: k8s-bigip1-ctlr
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      labels:
        app: k8s-bigip1-ctlr
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      serviceAccountName: bigip-ctlr
      containers:
      - name: k8s-bigip-ctlr
        image: "f5networks/k8s-bigip-ctlr"
        env:
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef:
              # Replace with the name of the Secret containing your login
              # credentials
              name: bigip-login
              key: username
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef:
              # Replace with the name of the Secret containing your login
              # credentials
              name: bigip-login
              key: password
        command: ["/app/bin/k8s-bigip-ctlr"]
        args: [
          # See the k8s-bigip-ctlr documentation for information about
          # all config options
          # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
          "--bigip-username=$(BIGIP_USERNAME)",
          "--bigip-password=$(BIGIP_PASSWORD)",
          "--bigip-url=10.1.20.11",
          "--bigip-partition=kubernetes",
          "--insecure=true",
          "--pool-member-type=cluster",
          "--agent=as3"
          ]
      imagePullSecrets:
        # Secret that gives access to a private docker registry
        - name: f5-docker-images
        # Secret containing the BIG-IP system login credentials
        - name: bigip-login

setup_cis_bigip2.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip2-ctlr-deployment
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: k8s-bigip2-ctlr
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      labels:
        app: k8s-bigip2-ctlr
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      serviceAccountName: bigip-ctlr
      containers:
      - name: k8s-bigip-ctlr
        image: "f5networks/k8s-bigip-ctlr"
        env:
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef:
              # Replace with the name of the Secret containing your login
              # credentials
              name: bigip-login
              key: username
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef:
              # Replace with the name of the Secret containing your login
              # credentials
              name: bigip-login
              key: password
        command: ["/app/bin/k8s-bigip-ctlr"]
        args: [
          # See the k8s-bigip-ctlr documentation for information about
          # all config options
          # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
          "--bigip-username=$(BIGIP_USERNAME)",
          "--bigip-password=$(BIGIP_PASSWORD)",
          "--bigip-url=10.1.20.12",
          "--bigip-partition=kubernetes",
          "--insecure=true",
          "--pool-member-type=cluster",
          "--agent=as3"
          ]
      imagePullSecrets:
        # Secret that gives access to a private docker registry
        - name: f5-docker-images
        # Secret containing the BIG-IP system login credentials
        - name: bigip-login

To deploy our CIS, run the following commands:

kubectl apply -f setup_cis_bigip1.yaml
kubectl apply -f setup_cis_bigip2.yaml

You can make sure that they got deployed successfully:
kubectl get pods -n kube-system

You can review whether each one launched successfully with the command:
kubectl logs <pod_name> -n kube-system
ubuntu@ip-10-1-1-4:~$ kubectl logs k8s-bigip2-ctlr-deployment-6f674c8d58-bbzqv -n kube-system
2020/01/03 12:04:06 [INFO] Starting: Version: 1.12.0, BuildInfo: n2050-623590021
2020/01/03 12:04:06 [INFO] ConfigWriter started: 0xc0002bb950
2020/01/03 12:04:06 [INFO] Started config driver sub-process at pid: 14
2020/01/03 12:04:07 [INFO] NodePoller (0xc000b7c090) registering new listener: 0x11bfea0
2020/01/03 12:04:07 [INFO] NodePoller started: (0xc000b7c090)
2020/01/03 12:04:07 [INFO] Watching Ingress resources.
2020/01/03 12:04:07 [INFO] Watching ConfigMap resources.
2020/01/03 12:04:07 [INFO] Handling ConfigMap resource events.
2020/01/03 12:04:07 [INFO] Handling Ingress resource events.
2020/01/03 12:04:07 [INFO] Registered BigIP Metrics
2020/01/03 12:04:07 [INFO] [2020-01-03 12:04:07,663 __main__ INFO] entering inotify loop to watch /tmp/k8s-bigip-ctlr.config563475778/config.json
2020/01/03 12:04:08 [INFO] Successfully Sent the FDB Records

You may also check your BIG-IPs' configuration: if CIS has been able to connect successfully, it will have created *another* partition called "kubernetes_AS3". This is explained here: https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-use-as3-backend.html
CIS will create the partition kubernetes_AS3 to store LTM objects such as pools and virtual servers. FDB and static ARP entries are stored in kubernetes. These partitions should not be managed manually.

Application Deployment
In this section, we will do the following:
Deploy an application with 2 replicas (ie 2 instances of our app)
Set up an ingress service to connect to our application

Deploy our application and service
Create the following service and deployment configuration files:

f5-hello-world-app-http-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: f5-hello-world
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: f5-hello-world
  template:
    metadata:
      labels:
        app: f5-hello-world
    spec:
      containers:
      - env:
        - name: service_name
          value: f5-hello-world
        image: f5devcentral/f5-hello-world:latest
        imagePullPolicy: Always
        name: f5-hello-world
        ports:
        - containerPort: 8080
          protocol: TCP

f5-hello-world-app-http-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: f5-hello-world
  namespace: default
  labels:
    app: f5-hello-world
spec:
  ports:
  - name: f5-hello-world
    port: 8080
    protocol: TCP
    targetPort: 8080
  type: NodePort
  selector:
    app: f5-hello-world

Apply your deployment and service:

kubectl apply -f f5-hello-world-app-http-deployment.yaml
kubectl apply -f f5-hello-world-app-http-service.yaml

ubuntu@ip-10-1-1-4:~$ kubectl apply -f f5-hello-world-app-http-deployment.yaml
deployment.apps/f5-hello-world created
ubuntu@ip-10-1-1-4:~$ kubectl apply -f f5-hello-world-app-http-service.yaml
service/f5-hello-world created

You can review your deployment and service with kubectl get:

ubuntu@ip-10-1-1-4:~$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
f5-hello-world-847698f5c6-59mmn   1/1     Running   0          2m3s
f5-hello-world-847698f5c6-pcz9v   1/1     Running   0          2m3s
ubuntu@ip-10-1-1-4:~$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
f5-hello-world   NodePort    10.97.83.208   <none>        8080:31380/TCP   2m
kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP          63d

Define our ingress service
CIS ingress annotations can be reviewed here: https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.11/#ingress-resources
Create the following file: f5-as3-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: singleingress1
  namespace: default
  annotations:
    # See the k8s-bigip-ctlr documentation for information about
    # all Ingress Annotations
    # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest/#supported-ingress-annotations
    virtual-server.f5.com/ip: "10.1.10.80"
    virtual-server.f5.com/http-port: "443"
    virtual-server.f5.com/partition: "kubernetes_AS3"
    virtual-server.f5.com/health: '[{"path": "/", "send": "HTTP GET /", "interval": 5, "timeout": 10}]'
spec:
  backend:
    # The name of the Service you want to expose to external traffic
    serviceName: f5-hello-world
    servicePort: 8080
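Apply the Ingress and confirm that CIS picked it up (a quick check; the describe output also shows the applied annotations):

kubectl apply -f f5-as3-ingress.yaml
kubectl describe ingress singleingress1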
You can review the BIG-IP configuration by going into the kubernetes_AS3 partition.

Summary
In this article we saw how to deploy CIS with AS3 to handle ingress services. CIS has more capabilities than ingress that you may review here: https://clouddocs.f5.com/containers/v2/kubernetes/ (configmap, routes, ...)
----
Relevant links to this article:
https://www.f5.com/products/automation-and-orchestration/container-ingress-services
https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-use-as3-backend.html
https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-k8s-as3.html
https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html

Enable Consistent Application Services for Containers with CIS
Kubernetes is all about abstracting away complexity. As Kubernetes continues to evolve, it becomes more intelligent and will become even more powerful when it comes to helping enterprises manage their data center, not just the cloud. While enterprises have had to deal with the challenges associated with managing different types of modern applications (AI/ML, big data, and analytics) to process that data, they are also faced with the challenge of maintaining top-level network/security policies and gaining better control of the workload, to ensure operational and functional consistency. This is where Cisco ACI and F5 Container Ingress Services come into the picture.

F5 CIS and Cisco ACI
Cisco ACI offers these customers an integrated network fabric for Kubernetes. Recently, F5 and Cisco joined forces by integrating F5 Container Ingress Services (or CIS) with Cisco ACI to bring L4-7 services into the Kubernetes environment, to further simplify the user experience in deploying, scaling and managing containerized applications. This integration specifically enables:
Unified networking: containers, VMs, and bare metal
Secure multi-tenancy and seamless integration of Kubernetes network policies and ACI policies
A single point of automation with enhanced visibility for ACI and BIG-IP
F5 Application Services natively integrated in Container and PaaS environments
One of the key benefits of such an implementation is the ACI encapsulation normalization. The ACI fabric, as the normalizer for the encapsulation, allows you to merge different network technologies or encapsulations, be it VLAN or VXLAN, into a single policy model. BIG-IP, through a simple VLAN connection to ACI, with no need for an additional gateway, can communicate with any service anywhere.

Solution Deployment
To integrate F5 CIS with Cisco ACI for the Kubernetes environment, you perform a series of tasks. Some you perform in the network to set up the Cisco Application Policy Infrastructure Controller (APIC); others you perform on the Kubernetes server(s). Rather than getting down to the nitty-gritty, I will just highlight the steps to deploy the joint solution.

Pre-requisites
The BIG-IP CIS and Cisco ACI joint solution deployment assumes that you have the following in place:
A working Cisco ACI installation
ACI integrated with vCenter with dVS
Fabric tenant pre-provisioned with the required VRFs/EPGs/L3OUTs
BIG-IP already running for non-container workloads

Deploying Kubernetes Clusters to ACI Fabrics
The following steps will provide you a complete cluster configuration:

Step 1. Run the ACI provisioning tool to prepare Cisco ACI to work with Kubernetes
Cisco provides an acc-provision tool to provision the fabric for the Kubernetes VMM domain and to generate a .yaml file that Kubernetes uses to deploy the required Cisco Application Centric Infrastructure (ACI) container components. You can download the provisioning tool here. Next, you can use this provisioning tool to generate a sample configuration file that you can edit:

$ acc-provision --sample > aci-containers-config.yaml

We can now edit the sample configuration file to provide information from your network. With that configuration file, you can run the following command to provision the Cisco ACI fabric:

acc-provision -c aci-containers-config.yaml -o aci-containers.yaml -f kubernetes-<version> -a -u [apic username] -p [apic password]

Step 2. Prepare the ACI CNI Plugin configuration file
The above command also generates the file aci-containers.yaml that you use after installing Kubernetes.
Step 3. Prepare the Kubernetes nodes - set up networking for the nodes to support the Kubernetes installation
With ACI provisioned, you start to prepare networking for the Kubernetes nodes. This includes steps such as configuring the VM interfaces toward the ACI fabric, configuring a static route for the multicast subnet, configuring the DHCP client to work with ACI, etc.

Step 4. Install the Kubernetes cluster
After you provision Cisco ACI and prepare the Kubernetes nodes, you can install Kubernetes and the ACI containers. You can use any installation method appropriate to your environment.

Step 5. Deploy the Cisco ACI CNI plugin
When the Kubernetes cluster is up and running, you can copy the previously generated CNI configuration to the master node and install the CNI plug-in using the following command:

kubectl apply -f aci-containers.yaml

The command installs the following (PODs):
ACI Containers Host Agent and OpFlex agent in a DaemonSet called aci-containers-host
Open vSwitch in a DaemonSet called aci-containers-openvswitch
ACI Containers Controller in a deployment called aci-containers-controller
Other required configurations, including service accounts, roles, and security context
For the authoritative word on this specific implementation, you can click here for the workflow for integrating k8s into Cisco ACI, for the latest and greatest.
After you have performed the previous steps, you can verify the integration in the Cisco APIC GUI. The integration creates a tenant, three EPGs, and a VMM domain. Each tenant will have visibility of all the Kubernetes PODs.

Install the BIG-IP Controller
The F5 BIG-IP Controller (k8s-bigip-ctlr), or Container Ingress Services if you aren't familiar, is a Kubernetes-native service that provides the glue between container services and BIG-IP. It watches for changes and communicates those to BIG-IP delivered application services. These, in turn, keep up with the changes in container environments and enable enforcement of security policies.
Once you have a running Kubernetes cluster deployed to the ACI fabric, you can follow these instructions to install the BIG-IP Controller. Use the kubectl get command to verify that the k8s-bigip-ctlr Pod launched successfully.

BIG-IP as north-south load balancer for External Services
For Kubernetes services that are exposed externally and need to be load balanced, Kubernetes does not handle the provisioning of the load balancing. It is expected that the load balancing network function is implemented separately. For these services, Cisco ACI takes advantage of the symmetric policy-based redirect (PBR) feature available in the Cisco Nexus 9300-EX or FX leaf switches in ACI mode.
This is where BIG-IP Container Ingress Services (or CIS) comes into the picture, as the north-south load balancer. On ingress, incoming traffic to an externally exposed service is redirected by PBR to BIG-IP for that particular service. If a Kubernetes cluster contains more than one IP pod for a particular service, BIG-IP will load balance the traffic across all the pods for that service. In addition, each new POD is added to the BIG-IP pool automatically.

Conclusion
F5 CIS and Cisco ACI together offer unified control, visibility, security and application services, for both container and non-container workloads.

Further Resources
F5 Container Ingress Services: Click here
Cisco ACI and Kubernetes Integration: Click here

Deploy OpenShift 4.x with BIG-IP CIS in AWS
OpenShift Container Platform (or OCP) provides the HAProxy template router as the default plug-in serving as the ingress point for all external traffic. While this is fine for small-scale deployments, there are some significant challenges when looking to scale your OCP deployments beyond single-cluster, single-site deployments. As with any architectural design, we have to consider our desired 'end state' architecture. For example:
Will your organization deploy applications across clusters as the environment starts to scale?
How about agile development methodologies and blue/green A/B deployment scenarios: will the default ADC have the intelligence to automatically direct traffic between production and non-production workloads?
How about failover and site resiliency?
F5 BIG-IP provides these services using Container Ingress Services (or CIS), with a more simplified architecture, to help your organization scale applications and services across clusters and sites. In addition, F5 BIG-IP offers advanced access and security control for the traffic going into or out of an OpenShift cluster, to ensure consistent policy enforcement and end-to-end compliance in any cloud.
In this article, we're going to walk you through a fairly minimal deployment of OpenShift 4.3 with BIG-IP CIS in Amazon Web Services (AWS). With that in place, you can enable more complex use cases. So let's get started.

Prerequisites
If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
Confirm that the AWS IAM user name you are using to create the OpenShift cluster is granted the AdministratorAccess policy.
Make sure you have access to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

Configuring Route53
To install OpenShift Container Platform, the AWS account you use must have a dedicated public hosted zone in your Route53 service. This zone must be authoritative for the domain. The Route53 service provides cluster DNS resolution and name lookup for external connections to the OCP cluster.
If you registered your domain with Route53, you do not need any further configuration, as a hosted zone was automatically created. If you use a public domain hosted outside Route53, you would need to do the following:
Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation.
Share the NS records and the SOA record with your IT team for adding the entries in DNS.

Provision the OpenShift cluster
Before you install OpenShift Container Platform, download the installation file to a local computer.
Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. The installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster.
Extract the installation program.
For example, on a computer that uses a Linux operating system, run the following command:

tar xvf <installation_program>.tar.gz

From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Run the installation program:

❯ ./openshift-install create cluster --dir ~/aws-ocp43 --log-level=info
? SSH Public Key /Users/zji/.ssh/id_rsa.pub
? Platform aws
? Region us-west-2
? Base Domain <mybasedomain>
? Cluster Name cluster1
? Pull Secret [? for help] *********************************************************************************
INFO Creating infrastructure resources
INFO Waiting up to 30m0s for the Kubernetes API at https://api.cluster1.mybasedomain:6443...
INFO API v1.16.2+f2384e2 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.cluster1.mybasedomain:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/zji/aws-ocp43/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster1.mybasedomain
INFO Login to the console with user: kubeadmin, password: 00000-00000-00000-00000

What just happened?
Let's review what just happened. The above installation program automatically set up the following AWS resources for the Red Hat OpenShift environment:
A virtual private cloud (VPC) that spans three Availability Zones, with one private and one public subnet in each Availability Zone.
An internet gateway to provide internet access to each subnet.
An OpenShift master ELB
An OpenShift node ELB
In the private subnets:
Three OpenShift master (including etcd) instances in an Auto Scaling group
Three OpenShift node instances in an Auto Scaling group
Source: https://aws.amazon.com/quickstart/architecture/openshift/
As an account admin for AWS, you can list all the resources that OpenShift or its installer has created per cluster:

❯ aws resourcegroupstaggingapi get-resources --tag-filters "Key=kubernetes.io/cluster/cluster2-7j2jr" | jq '.ResourceTagMappingList[].ResourceARN'
"arn:aws:ec2:us-west-2:877162104333:dhcp-options/dopt-0d8651a54eddb2acb"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-0c4b4d66dbf695655"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-077f8efc0cd8d0b01"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-05001638bc043f0cd"
"arn:aws:ec2:us-west-2:877162104333:elastic-ip/eipalloc-03abd4c3fb87a7a7d"
...
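As a side note not covered in the original walkthrough: if you later need to tear the environment down, the same installer can remove everything it created and tagged (the --dir value must match the directory used at install time):

# Deletes the cluster and the AWS resources the installer created
./openshift-install destroy cluster --dir ~/aws-ocp43 --log-level=info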
Logging in to the cluster
Next, you can install the CLI in order to interact with OpenShift Container Platform using a command-line interface. You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

Verify that you can run oc commands successfully using the exported configuration:

$ oc whoami
kube:admin
❯ oc get node
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-128-147.us-west-2.compute.internal   Ready    worker   26m   v1.16.2+f2384e2
ip-10-0-141-160.us-west-2.compute.internal   Ready    master   34m   v1.16.2+f2384e2
ip-10-0-149-163.us-west-2.compute.internal   Ready    master   34m   v1.16.2+f2384e2
ip-10-0-152-36.us-west-2.compute.internal    Ready    worker   26m   v1.16.2+f2384e2
ip-10-0-160-247.us-west-2.compute.internal   Ready    master   34m   v1.16.2+f2384e2
ip-10-0-169-120.us-west-2.compute.internal   Ready    worker   25m   v1.16.2+f2384e2

Simplify Load Balancing with BIG-IP
By default, the OpenShift deployment instantiates the built-in HAProxy template router as the default router. For OpenShift in AWS, it also deploys an AWS ELB as the frontend L4 load balancer, resulting in a two-layer load balancer architecture as illustrated below. Some patterns insert yet another layer of scalability across clusters.
F5 BIG-IP simplifies the architecture with a single layer of load balancing where the BIG-IP is exposed directly to the Internet and also performs L7 routing including SSL off-loading, thus improving the performance of apps served from the cluster and the scalability of the overall architecture. It also offers additional benefits: you can further reduce latency by adding Advanced WAF, Access Policy control, intelligent traffic management, and many more application delivery and security offerings from BIG-IP.
Follow the steps to deploy BIG-IP into the existing VPC: https://clouddocs.f5.com/cloud/public/v1/aws_index.html
Next, you can refer to the F5 CIS user guide to deploy and configure CIS for OpenShift. If you deploy BIG-IP CIS in cluster mode, you may implement VXLAN to route the traffic between the BIG-IP and the OpenShift cluster. By default, direct access to OpenShift nodes is limited. To support VXLAN traffic from BIG-IP, you want to adjust the OpenShift security group accordingly by exposing the additional ports required for VXLAN.
You can verify that F5 BIG-IP CIS is successfully installed:

❯ oc get pods -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
k8s-bigip-ctlr-6664d45f57-cjb8g   1/1     Running   0          15d   10.131.0.46   ip-10-0-222-250.us-west-2.compute.internal   <none>           <none>

Summary
Red Hat provides an excellent foundation for building a production-ready OpenShift-in-AWS environment; BIG-IP CIS can further simplify the architecture and improve performance by converging the 2-tier load balancing into a single layer. In addition, BIG-IP can provide advanced application delivery and security features, and we will cover more use cases in the following articles.

Digital Transformation in Financial Services Using Production Grade Kubernetes Deployment
The Banking and Financial Services Industry (BFSI) requires the speed of modern application development in order to shorten the time it takes to bring value to their customers. But they also face the constraints of security and regulatory requirements that tend to slow down the development and deployment process. F5 and NGINX bring the security and agile development technology, while Red Hat OpenShift provides the modern development architecture needed to achieve the speed and agility required by BFSI companies.

Upgrade to CIS 2.16.1 with CCCL agent
Hello colleagues,
I have read in the release notes [1] that AS3 was introduced in 1.9.0. I know that ConfigMaps in CIS differ from agent to agent, and that from CIS v2.0, AS3 is the default agent. We have CIS 1.8.1 and would like to upgrade to 2.16.1. Could we use the deployment argument --agent to configure the CCCL agent and upgrade to 2.16.1?
PS: I know the recommendation is to migrate to the AS3 agent, but for now we only want to upgrade our k8s-bigip-ctlr.
[1] https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.14/RELEASE-NOTES.html#v1-9-0

CIS CRD http compression profile
Hi all,
I am trying to add an HTTP compression profile to my virtual server with a CIS CRD. So far, I have come across two solutions.
ConfigMap: however, in my scenario, I need to use only CRDs.
iRule: I attempted to create two iRules to mimic the functionality of the httpcompression and wan-optimized-compression profiles. However, I'm uncertain if they fully replicate the intended functionality. Below are the iRules I created:

httpcompression

when HTTP_RESPONSE {
  COMPRESS::disable
  if { [HTTP::header "Content-Length"] >= 1024 && ( [HTTP::header "Content-Type"] starts_with "text/" || [HTTP::header "Content-Type"] starts_with "application/xml" || [HTTP::header "Content-Type"] starts_with "application/x-javascript" ) } {
    COMPRESS::gzip memory_level 8
    COMPRESS::gzip window_size 16
    COMPRESS::gzip level 1
    COMPRESS::buffer_size 4096
    COMPRESS::enable
  }
}

wan-optimized-compression

when HTTP_RESPONSE {
  COMPRESS::disable
  if { [HTTP::header "Content-Length"] >= 1024 && ( [HTTP::header "Content-Type"] starts_with "text/" || [HTTP::header "Content-Type"] starts_with "application/xml" || [HTTP::header "Content-Type"] starts_with "application/x-javascript" ) } {
    COMPRESS::gzip memory_level 16
    COMPRESS::gzip window_size 64
    COMPRESS::gzip level 1
    COMPRESS::buffer_size 131072
    COMPRESS::enable
  }
}

Do you have any suggestions for further improvements? Also, where can I submit a feature request to add HTTP compression profile support to the CRD?
Thank you in advance for your assistance.
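As a point of reference for the iRule route described above: the VirtualServer CRD can reference a pre-created iRule by path, so one of the iRules above (saved on the BIG-IP) could be attached roughly like this (a sketch; the names, address and pool are assumptions, and the iRules field requires a CIS version that supports it):

apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: my-app-vs
  namespace: default
spec:
  host: my-app.example.com
  virtualServerAddress: "10.1.10.90"
  iRules:
    - /Common/httpcompression     # the iRule created above, saved on the BIG-IP
  pools:
  - path: /
    service: my-app
    servicePort: 8080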