BIG-IP deployment options with Openshift
NOTE: this article has been superseded by these updated articles: F5 BIG-IP deployment with OpenShift - platform and networking options F5 BIG-IP deployment with OpenShift - publishing application options NOTE: the content below is outdated. This article is meant to be an agnostic overview of the possibilities of how to use BIG-IP with RedHat Openshift: either on-prem or in the cloud, either in 1-tier or in 2-tier arrangements, possibly alongside NGINX+. This blog is structured as follows: Introduction BIG-IP platform flexibility: deployment, scalability and multi-tenancy options Openshift networking options BIG-IP networking options 1-tier arrangement 2-tier arrangement Publishing the applications: BIG-IP CIS Kubernetes resource types Service type Load Balancer Ingress and Route resources, the extensibility problem. Full flexibility & advanced services with AS3 Configmaps. F5 Custom Resource Definitions (CRDs). Installing Container Ingress Services (CIS) for Openshift & BIG-IP integration Conclusion Introduction When using BIG-IP with RedHat Openshift Kubernetes, a container component named Container Ingress Services (CIS from now on) is used to plug the BIG-IP APIs into the Kubernetes APIs. When a user configuration is applied or when a status change has occurred in the cluster, CIS automatically updates the configuration in the BIG-IP using the AS3 declarative API. CIS supports IP Address Management (IPAM from now on) by making use of the F5 IPAM Controller (FIC from now on), which is deployed as a container as well. The FIC IPAM controller can have its own address database or be connected to an external provider such as Infoblox. The next picture shows how these components fit together. A single BIG-IP cluster can manage both VM and container workloads in the same cluster, and separation between these can be set at the administrative level with partitions and at the network level with routing domains if required. BIG-IP offers a wide range of options to be used with RedHat Openshift. Often these have been driven by customers' requests. In the next sections we cover these options and the considerations to be taken into account to choose between them. The full documentation can be found in F5 clouddocs. F5 BIG-IP container integrations are Open Source Software (OSS) and can be found in this github repository, where you will find additional technical details. Please comment below if you have any questions about this article. BIG-IP platform flexibility: deployment, scalability and multi-tenancy options First of all, it should be clarified that regardless of the deployment option chosen, it is independent of the BIG-IP being an appliance, a scale-out chassis or a Virtual Edition. The configuration is always the same. This platform flexibility also opens up different options for scalability, multi-tenancy, hardware accelerators or HSMs/NetHSMs/SaaS-HSMs to keep the SSL/TLS private keys secure in a FIPS-compliant manner. The following options apply to a single BIG-IP cluster: A single BIG-IP cluster can handle several Openshift clusters. This requires at least one CIS instance per Openshift cluster. It is also possible that a given CIS instance manages a selected set of namespaces. These namespaces can be specified with a list or a label selector. In the BIG-IP, each CIS instance will typically write to a dedicated partition, isolated from other CIS instances. When using AS3 ConfigMaps, a single CIS can manage several BIG-IP partitions.
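As an illustration of this per-cluster, per-namespace scoping, each CIS instance is pointed at its BIG-IP and its slice of the cluster with a handful of startup arguments. The fragment below is a minimal, hedged sketch of a CIS container spec (the address, namespaces and partition name are placeholders, not values from this article); --namespace can be repeated, or replaced with --namespace-label to select namespaces by label:

# Illustrative fragment of a k8s-bigip-ctlr (CIS) container spec:
# one CIS instance per Openshift cluster (or per group of namespaces),
# each writing to its own BIG-IP partition.
args:
  - "--bigip-url=192.0.2.10"                 # BIG-IP management address (placeholder)
  - "--bigip-partition=openshift-cluster1"   # dedicated partition for this CIS instance
  - "--namespace=team-a"                     # watch an explicit list of namespaces...
  - "--namespace=team-b"
  # - "--namespace-label=team=a"             # ...or select namespaces with a label instead
  - "--pool-member-type=cluster"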
As indicated in the picture, a single BIG-IP cluster can scale up horizontally to up to 8 BIG-IP instances; this is referred to as Scale-N in the BIG-IP documentation. When hard tenant isolation is required, a dedicated BIG-IP cluster or a vCMP guest instance should be used. vCMP technology can be found in larger appliances and scale-out chassis. vCMP allows running several independent BIG-IP instances as guests, even running different versions of BIG-IP. Each guest can be allocated different amounts of hardware resources. In the next picture, guests are shown as different colored bars using several blades (grey bars). Openshift networking options Kubernetes' networking is provided by Container Networking Interface plugins (CNI from now on) and Openshift supports the following: OpenshiftSDN - supported since Openshift 3.x and still the default CNI. It makes use of VXLAN encapsulation. OVNKubernetes - supported since Openshift 4.4. It makes use of Geneve encapsulation. Feature-wise, these CNIs can be compared using the next table, taken from the Openshift documentation. Besides the above features, performance should also be taken into consideration. The NICs used in the Openshift cluster should do encapsulation off-loading, reducing the CPU load in the nodes. Increasing the MTU is recommended, especially for encapsulating CNIs; this is suggested in Openshift's documentation as well, and needs to be set at installation time in the install-config.yaml file, see this link for details. BIG-IP networking options The first thing that needs to be decided is how we want the BIG-IP to access the PODs: do we want the BIG-IP to access the PODs directly, or do we want to use the typical arrangement of 2-tier load balancing with an in-cluster Ingress Controller? Equally important is to decide how we want to do NetOps/DevOps separation. CI/CD pipelines provide a management layer which allows several teams to approve or block changes before committing. We are going to tackle how to achieve this separation without such an additional management layer. BIG-IP networking option - 1-tier arrangement In this arrangement, the BIG-IP is able to reach the PODs without any address translation. By using only one tier of load balancing (see the next picture), latency is reduced (potentially also increasing the client's session performance). Persistence is handled easily and the PODs can be directly monitored, providing an accurate view of the application's health. As can be seen in the picture above, in a 1-tier arrangement the BIG-IP is part of the CNI network. This is supported for both OpenshiftSDN and OVNKubernetes CNIs. Configuration for BIG-IP with the OpenshiftSDN CNI can be found in clouddocs.f5.com. Currently, when using the OVNKubernetes CNI the hybrid-networking option has to be used. In this latter case the Openshift cluster will extend its CNI network towards the BIG-IPs using VXLAN encapsulation instead of the Geneve encapsulation used internally within the Openshift nodes. BIG-IP configuration steps for OVNKubernetes in hybrid mode can be followed in this repository created by F5 PM Engineer Mark Dittmer until this is published in clouddocs.f5.com. With a 1-tier configuration there is a fine demarcation line between NetOps (who traditionally managed the BIG-IPs) and DevOps who want to expose their services in the BIG-IPs. The next diagram proposes a solution for this using the IPAM controller.
The roles and responsibilities would be as follows: The NetOps team would be responsible for setting up the BIG-IP and its basic configuration, up to the network connectivity towards the cluster, including the CNI overlay. The NetOps team would also be responsible for setting up the IPAM Controller and, with it, the assignment of the IP addresses for each DevOps team or project. The NetOps team would also set up the CIS instances. Each DevOps team or set of projects would have their own CIS instance, which would be fed with IP addresses from the IPAM controller. Each CIS instance would watch the namespaces of its DevOps team or project. These namespaces are owned by the different DevOps teams. The CIS configuration will specify the partition in the BIG-IP for the DevOps team or project. The DevOps teams, as expected, deploy their own applications and create Kubernetes Service definitions for CIS consumption. Moreover, the DevOps teams will also define how the Services will be published. This means creating Ingress, Route or any other CRD definition for publishing the services, constrained by the NetOps-owned IPAM controller and CIS instances. BIG-IP networking option - 2-tier arrangement This is the typical way in which Kubernetes clusters are deployed. When using a 2-tier arrangement, the External Load Balancer doesn't need to have awareness of the CNI and points to the NodePort addresses of the Ingress Controller inside the Kubernetes cluster. It is up to the infrastructure how to send the traffic to the Ingress Controllers. A 2-tier arrangement sets a harder line of demarcation between the NetOps and DevOps teams. This type of arrangement using BIG-IP can be seen next. Most External Load Balancers can only perform L4 functionalities, but BIG-IP can perform both L4 and L7 functionalities, as we will see in the next sections. Note: the proxy protocol mentioned in the diagram is used to allow persistence based on the client's IP in the Ingress Controller, regardless of whether the traffic is sent encrypted or not. Publishing the applications: BIG-IP CIS Kubernetes resource types Service type Load Balancer This is a Kubernetes built-in mechanism to expose Ingress Controllers in any External Load Balancer. In other words, this method is meant for 2-tier topologies. This mechanism is very feature-limited and extensibility is done by means of annotations. F5 CIS supports IPAM integration in this resource type. Check this link for all the options available. In general, a problem or limitation with Kubernetes annotations (regardless of the resource type) is that annotations are not validated by the Kubernetes API using a schema, therefore allowing bad configurations to be set in Kubernetes. The recommended practice is to limit annotations to simple configurations. Declarations with complex annotations will tend to silently fail or not behave as expected. Especially in these cases, CRDs are recommended. These will be described further down. Ingress and Route resources, the extensibility problem. Kubernetes and Openshift provide the following resource types for publishing L7 routes for HTTP/HTTPS services: Routes: Openshift exclusive, eventually going to be deprecated. Ingress: Kubernetes standard. Although these are simple to use, they are very limited in functionality and more often than not the Ingress Controllers require the use of annotations to augment the functionality. F5's available annotations for Routes can be checked in this link and for Ingress resources in this link.
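To make the annotation model concrete, the following is a hedged sketch of what a CIS-handled Ingress can look like (the address, names and health-monitor values are placeholders, not taken from this article). Note how even modest customization already means embedding JSON inside annotation strings, which is exactly the schema-validation problem described above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "f5"
    virtual-server.f5.com/ip: "192.0.2.100"       # VIP address requested on the BIG-IP
    virtual-server.f5.com/partition: "openshift"  # BIG-IP partition written by CIS
    virtual-server.f5.com/balance: "least-connections-member"
    # JSON embedded in a string - not schema-validated by the Kubernetes API:
    virtual-server.f5.com/health: |
      [{"path": "myapp.example.com/", "send": "GET / HTTP/1.1\r\nHost: myapp.example.com\r\n\r\n", "interval": 5, "timeout": 16}]
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80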
As mentioned previously, complex annotations should be avoided. When publishing L7 routes, the limitations of annotations are more evident and CRDs are even more recommended. Route and Ingress resources can be further augmented by means of the CIS feature named Override AS3 ConfigMap, which allows specifying an AS3 declaration and attaching it to a Route or Ingress definition. This gives access to almost all features & modules available in BIG-IP, as exhibited in the next picture. Although Override AS3 ConfigMap eliminates the extensibility limitations of annotations, it shares the problem that the declarations are not validated by the Kubernetes API using the AS3 schema. Instead, they are validated by CIS, but note that ConfigMaps are not capable of reporting the status of the declaration. Thus the ConfigMap declaration status can only be checked in the CIS logs. Override AS3 ConfigMap declarations are meant to be applied to all the services published by the CIS instance. In other words, this mechanism is useful to apply a general policy or shared configuration across several services (i.e. WAF, APM, elaborate monitoring). Full flexibility and advanced services with AS3 ConfigMap The AS3 ConfigMap option is similar to Override AS3 ConfigMap but it doesn't rely on having a pre-existing Ingress or Route resource. The whole BIG-IP configuration is set up in the ConfigMap. Using Full AS3 ConfigMaps with the --hubmode CIS option allows defining the services in the DevOps-owned namespaces and the VIP and associated configurations (i.e. TLS settings, IP intelligence, WAF policy, etc.) in a namespace owned by the NetOps team. This provides independence between the two teams. Override AS3 ConfigMaps tend to be small because these are just used to patch the Ingress and Route resources, in other words, extending the Ingress- and Route-generated AS3 configuration. On the other hand, using full AS3 ConfigMaps requires creating a large AS3 JSON declaration that Ingress/Route users are not used to. Again, the AS3 definition within the ConfigMap is validated by BIG-IP and not by Kubernetes, which is a limitation because the status of the configuration can only be fully checked in the CIS logs. F5 Custom Resource Definitions (CRDs) Above we've seen the Kubernetes built-in resource types and their limitations in advanced services & flexibility. We've also seen the Swiss-army knife that AS3 ConfigMaps are, and their limitation of not being Kubernetes schema-validated. Kubernetes allows API augmentation by allowing Custom Resource Definitions (CRDs) to define new resource types for any functionality needed. F5 has created the following CRDs to provide the ease of use of built-in resource types but with greater functionality, without requiring annotations. Each CRD is focused on different use cases: IngressLink aims to simplify 2-tier deployments when using BIG-IP and NGINX+, by using the IngressLink CRD instead of a Service of type LoadBalancer. At present the IngressLink CRD provides the following features: Proxy Protocol support or other customizations by using iRules. Automatic health check monitoring of the NGINX+ readiness port in BIG-IP. It's possible to link with NGINX+ using either NodePort or Cluster mode, in the latter case bypassing any kube-proxy/iptables indirection. More to come... When using IngressLink, it automatically exposes both port 443 and port 80, sending the requests to the NGINX+ Ingress Controller. An illustrative IngressLink definition is sketched below.
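As a hedged illustration (the address, namespace and label are placeholders, and the iRule shown is the Proxy Protocol example from F5's documentation), an IngressLink resource selecting an NGINX+ Ingress Controller Service could look roughly like this:

apiVersion: cis.f5.com/v1
kind: IngressLink
metadata:
  name: nginx-ingresslink
  namespace: nginx-ingress
spec:
  virtualServerAddress: "192.0.2.50"   # VIP published on the BIG-IP (placeholder)
  iRules:
    - /Common/Proxy_Protocol_iRule     # optional customization applied by the BIG-IP
  selector:
    matchLabels:
      app: ingresslink                 # must match the labels of the NGINX+ Service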
TransportServer is meant to expose non-HTTP traffic: it can be any TCP or UDP traffic on any port, and again it offers several controls without requiring annotations. VirtualServer takes an L7-route-oriented approach analogous to Ingress/Route resources, but provides advanced configurations whilst avoiding the use of annotations or Override AS3 ConfigMaps. This can be used in either a 1-tier or a 2-tier arrangement as well. In the latter case the BIG-IP would take the function of External Load Balancer of in-cluster Ingress Controllers, yet still provide advanced L7 services. All these new CRDs support IPAM. Summary of BIG-IP CIS Kubernetes resource types So which resource types should be used? The next tables try to summarize their features, strengths and usability: ease of use; network topology and overall suitability; and a comparison of CRDs, Ingress/Routes and ConfigMaps. Please note that the features of the different resources are continuously changing; please check the latest docs for more up-to-date information. Installing Container Ingress Services (CIS) for Openshift & BIG-IP integration CIS installation can be performed in different ways: Using Kubernetes resources (named manual in F5 clouddocs) - this approach is the most low-level one and allows for ultimate customization. Using the Helm chart. This provides life-cycle management of the CIS installation in any Kubernetes cluster. Using the CIS Operator. Built on top of the Helm chart, it additionally provides Openshift-integrated management. In the screenshots below we can see how the Openshift Operator construct allows for automatic download and updates. We can also see the use of the F5BigIpCtlr resource type to configure the different instances. At present, IPAM controller installation is only done using Kubernetes resources. After these components are created, the VXLAN configuration needs to be created in the BIG-IP; this can be automated using any of the BIG-IP automation options, mainly Ansible and Terraform. Conclusion F5 BIG-IP provides several options for deployment in Openshift with unmatched functionality, used either as External Load Balancer or as Ingress Controller achieving a single-tier setup. Three components are used for this integration: The F5 Container Ingress Services (CIS) for plugging the Kubernetes API into BIG-IP. The F5 CIS Openshift Operator for installing and managing CIS. The F5 IPAM controller. Resource types are the API used to define how Services or Ingress Controllers are published in the F5 BIG-IP. These are constantly being updated and it is recommended to check F5 clouddocs for up-to-date information. We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our github repository.
Calico, Kubernetes and BIG-IP
In order to follow along with this post you will need a couple of things. First, a working Kubernetes deployment. If you don't have one, following this link will get you up and running. The second thing you will need is a BIG-IP. Don't have one? Click here. You will need the advanced routing modules to get BGP working. The last thing is a Container Connector. If you don't have one of those yet you can pull it from Docker Hub. Now, it's a lot of work to get all these pieces up and running. An easier solution would be to automate it, as we do, using OpenStack and heat templates. This gets your services up and running quickly, and identically every time. Need to update? Change your heat template. Broke your stack? Use the heat template. And that's it. Let's Get Started Spin up a simple nginx service in your kubernetes deployment so we can see all this magic happening. # Run the Pods. kubectl run nginx --replicas=2 --image=nginx # Create the Service. kubectl expose deployment nginx --port=80 # Run a Pod and try to access the `nginx` Service. $ kubectl run access --rm -ti --image busybox /bin/sh Waiting for pod policy-demo/access-472357175-y0m47 to be running, status is Pending, pod ready: false If you don't see a command prompt, try pressing enter. / # wget -q nginx -O - You should see a response from nginx. Great! Our Service is accessible. You can exit the Pod now. Installing Calico Let's install Calico next. Following along here, it's super easy to install and we use the provided config map almost exactly; the only change we make is to the IP pool. Calicoctl can be run in two ways. The first is through a docker container, which allows most commands to work. This can be useful just for a quick check but can become cumbersome if running a lot of commands, and it doesn't allow you to run commands that need access to the PID namespace or files on the host without adding volume mounts. To install calicoctl on the node: wget https://github.com/projectcalico/calico-containers/releases/download/v1.0.1/calicoctl chmod +x calicoctl sudo mv calicoctl /usr/bin To run calicoctl through a docker container: docker run -i --rm --net=host calico/ctl:v1.0.1 version Now, this is where things get a bit more interesting. We need to set up the BGP peering in Calico so it can advertise our endpoints to the BIG-IP. Take note of the asNumber Calico is using. You can set your own or use the default, then run the create command.
calicoctl config get asnumber cat << EOF | calicoctl create -f - apiVersion: v1 kind: bgpPeer metadata: peerIP: 172.16.1.6 scope: global spec: asNumber: 64511 EOF I'm using the typical 3-interface setup: a management interface, an external (web-facing) interface, and an internal interface (172.16.1.6 in my example) that connects to the Kubernetes cluster. So replace 172.16.1.6 with whatever IP address you've assigned to the BIG-IP's internal interface. Verify it was set up correctly: sudo calicoctl node status You should receive an output similar to this: Calico process is running. IPv4 BGP status +--------------+-------------------+-------+------------+-------------+ | PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO | +--------------+-------------------+-------+------------+-------------+ | 172.16.1.6 | node-to-node mesh | up | 2017-01-30 | Established | | 172.16.1.7 | node-to-node mesh | up | 2017-01-30 | Established | | 172.16.1.3 | global | up | 21:41:21 | Established | +--------------+-------------------+-------+------------+-------------+ The "global" peer type will show "State:start" and "Info:Active" (or another status showing it isn't connected) until we get the BIG-IP configured to handle BGP. Once the BIG-IP has been configured it should show "State:up" and "Info:Established" as you can see above. If for some reason you need to remove BIG-IP as a BGP peer, you can run: sudo calicoctl delete bgppeer --scope=global 172.16.1.6 We aren't done in our Kubernetes deployment yet. It's time for the important piece... the Container Connector (known as the CC). Container Connectors To start, let's set up a secret to hold our BIG-IP information - user name, password and URL. If you need some help with secrets check here. We set up our secrets through a yaml similar to the one below and apply it by running kubectl create -f secret.yaml apiVersion: v1 items: - apiVersion: v1 data: password: xxx url: xxx username: xxx kind: Secret metadata: name: bigip-credentials namespace: kube-system type: Opaque kind: List metadata: {} Something important to note about the secret: it has to be in the same namespace you deploy the CC in. If it's not, the CC won't be able to update your BIG-IP.
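If you prefer not to handle base64 values in a YAML file, the same secret can be created imperatively. The following is an equivalent hedged one-liner (the credentials and URL are obviously placeholders); kubectl takes care of the encoding:

# Creates the bigip-credentials Secret in kube-system
kubectl create secret generic bigip-credentials -n kube-system \
  --from-literal=username=admin \
  --from-literal=password=changeme \
  --from-literal=url=https://192.0.2.10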
Let's take a look at the config we are going to use to deploy the CC: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: f5-k8s-controller namespace: kube-system spec: replicas: 1 template: metadata: name: f5-k8s-controller labels: app: f5-k8s-controller spec: containers: - name: f5-k8s-controller # Specify the path to your image here image: "path/to/cc/image:latest" env: # Get sensitive values from the bigip-credentials secret - name: BIGIP_USERNAME valueFrom: secretKeyRef: name: bigip-credentials key: username - name: BIGIP_PASSWORD valueFrom: secretKeyRef: name: bigip-credentials key: password - name: BIGIP_URL valueFrom: secretKeyRef: name: bigip-credentials key: url command: ["/app/bin/f5-k8s-controller"] args: - "--bigip-url=$(BIGIP_URL)" - "--bigip-username=$(BIGIP_USERNAME)" - "--bigip-password=$(BIGIP_PASSWORD)" - "--namespace=default" - "--bigip-partition=k8s" - "--pool-member-type=cluster" Important pieces of this config are the path to your CC image and the last three arguments we are going to start up the CC with. The namespace argument is the namespace you want the CC to watch for changes on. Our nginx service is in default, so we want to watch default. The bigip-partition is where the CC will create your virtual servers. This partition has to already exist on your BIG-IP and it can NOT be your "Common" partition (the CC will take over the partition and manage everything inside). And the last one, pool-member-type. We are using cluster because Calico allows us to see all endpoints (pods in the nginx service), which is the whole point of this post! You can also leave this argument off and it will default to a NodePort setup, but that won't allow you to take advantage of BIG-IP's advanced load balancing across all endpoints. To deploy the CC we run kubectl create -f cc.yaml And to verify everything looks good: kubectl get deployment f5-k8s-controller --namespace kube-system We are almost done setting up the CC but we have one last piece: telling it what needs to be configured on the BIG-IP. To do that we use a ConfigMap and run kubectl create -f vs-config.yaml kind: ConfigMap apiVersion: v1 metadata: name: example-vs namespace: default labels: f5type: virtual-server data: schema: "f5schemadb://bigip-virtual-server_v0.1.1.json" data: | { "virtualServer": { "frontend": { "balance": "round-robin", "mode": "http", "partition": "k8s", "virtualAddress": { "bindAddr": "172.17.0.1", "port": 80 } }, "backend": { "serviceName": "nginx", "servicePort": 80 } } } Let's talk about what we are seeing. The virtual server section is what gets applied to the BIG-IP, concatenated with information from Kubernetes about the service endpoints. So, frontend is BIG-IP configuration options and backend is Kubernetes options. Of note, the bindAddr is where your traffic comes in from the outside world; in our demo, 172.17.0.1 is my internet-facing IP address. Awesome, if all went well you should now be able to access the BIG-IP GUI and see your virtual server being configured automatically.
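To confirm the Container Connector actually picked up the ConfigMap, its logs and the surrounding Kubernetes objects are the quickest things to check; a hedged example using the names from above:

# Tail the Container Connector logs to see the virtual-server declaration being processed
kubectl logs -f deployment/f5-k8s-controller --namespace kube-system
# Confirm the ConfigMap and the nginx Service endpoints it maps to exist
kubectl get configmap example-vs -o yaml
kubectl get endpoints nginx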
If you want to see something really cool (and you do) run kubectl edit deployment nginx and under 'spec' update the 'replicas' count to 5, then go check the virtual server pool members on the BIG-IP. Cool huh? The BIG-IP can see our pool members but it can't actually route traffic to them. We need to set up the BGP peering on our BIG-IP so it has a map to get traffic to the pool members. BIG-IP BGP Peering From the BIG-IP GUI do these couple of steps to allow BGP peering to work: Network >> Self IPs >> selfip.internal (or whatever you called your internal network) Add a custom TCP port of 179 - this allows BGP peering through Go to Network >> Route Domains >> 0 (again, if you called it something different, use that) Under Dynamic Routing Protocols move "BGP" to Enabled and push Update. We are done with the GUI, let's SSH into our BIG-IP and keep moving. Explanation of these commands is beyond the scope of this post, so if you are confused or want more info check the docs here and here. imish enable configure terminal router bgp 64511 neighbor group1 peer-group neighbor group1 remote-as 64511 neighbor 172.0.0.0 peer-group group1 You will need to add all the nodes that you want to peer with from your Kubernetes deployment. As an example, if you have 1 master and 2 workers you need to add all 3. Let's check some of our configuration outputs on the BIG-IP to verify we are seeing what is expected. From the BIG-IP run ip route There will be some output but what we are looking for is something like 10.4.0.1/26 via 172.16.1.9 dev internal proto zebra where 10.4.0.1/26 is a pod subnet and 172.16.1.9 is your node IP. This tells us that the BIG-IP has a route to our pods by going through our nodes! If everything is looking good it's time to test it out end to end. Curl your BIG-IP's external IP and you will get a response from your nginx pod and that's it! You now have a working end-to-end setup using a BIG-IP for your load balancing, Calico advertising BGP routes to the endpoints and Kubernetes taking care of your containers. The Container Connector and other products for containerized environments like the Application Service Proxy are available for use today. For more information check out our docs.
CIS and Kubernetes - Part 1: Install Kubernetes and Calico
Welcome to this series to see how to: Install Kubernetes and Calico (Part 1) Deploy F5 Container Ingress Services (F5 CIS) to tie applications lifecycle to our application services (Part 2) Here is the setup of our lab environment: BIG-IP Version: 15.0.1 Kubernetes component: Ubuntu 18.04 LTM We consider that your BIG-IPs are already setup and running: Licensed and setup as a cluster The networking setup is already done Part 1: Install Kubernetes and Calico Setup our systems before installing kubernetes Step1: Update our systems and install docker To run containers in Pods, Kubernetes uses a container runtime. We will use docker and follow the recommendation provided here As root on ALL Kubernetes components (Master and Node): # Install packages to allow apt to use a repository over HTTPS apt-get -y update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common # Add Docker’s official GPG key curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - # Add Docker apt repository. add-apt-repository \ "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) \ stable" # Install Docker CE. apt-get -y update && apt-get install -y docker-ce=18.06.2~ce~3-0~ubuntu # Setup daemon. cat > /etc/docker/daemon.json <<EOF { "exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": { "max-size": "100m" }, "storage-driver": "overlay2" } EOF mkdir -p /etc/systemd/system/docker.service.d # Restart docker. systemctl daemon-reload systemctl restart docker We may do a quick test to ensure docker run as expected: docker run hello-world Step2: Setup Kubernetes tools (kubeadm, kubelet and kubectl) To setup Kubernetes, we will leverage the following tools: kubeadm: the command to bootstrap the cluster. kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers. kubectl: the command line util to talk to your cluster. As root on ALL Kubernetes components (Master and Node): curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - cat <<EOF | tee /etc/apt/sources.list.d/kubernetes.list deb https://apt.kubernetes.io/ kubernetes-xenial main EOF apt-get -y update We can review which version of kubernetes is supported with F5 Container Ingress Services here At the time of this article, the latest supported version is v1.13.4. We'll make sure to install this specific version with our following step apt-get install -qy kubelet=1.13.4-00 kubeadm=1.13.4-00 kubectl=1.13.4-00 kubernetes-cni=0.6.0-00 apt-mark hold kubelet kubeadm kubectl Install Kubernetes Step1: Setup Kubernetes with kubeadm We will follow the steps provided in the documentation here As root on the MASTER node (make sure to update the api server address to reflect your master node IP): kubeadm init --apiserver-advertise-address=10.1.20.20 --pod-network-cidr=192.168.0.0/16 Note: SAVE somewhere the kubeadm join command. It is needed to "assimilate" the node later. In my example, it looks like the following (YOURS WILL BE DIFFERENT): kubeadm join 10.1.20.20:6443 --token rlbc20.va65z7eauz89mmuv --discovery-token-ca-cert-hash sha256:42eca5bf49c645ff143f972f6bc88a59468a30276f907bf40da3bcf5127c0375 Now you should NOT be ROOT anymore. Go back to your non root user. 
Since i use Ubuntu, i'll use the default "ubuntu" user Run the following commands as highlighted in the screenshot above: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Step2: Install the networking component of Kubernetes The last step is to setup the network related to our k8s infrastructure. In our kubeadm init command, we used --pod-network-cidr=192.168.0.0/16 in order to be able to setup next on network leveraging Calico as documented here kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml You may monitor the deployment by running the command: kubectl get pods --all-namespaces After some time (<1 min), everything shouldhave a "Running" status. Make sure that CoreDNS started also properly. If everything is up and running, we have our master setup properly and can go to the node to setup k8s on it. Step3: Add the Node to our Kubernetes Cluster Now that the master is setup properly, we can assimilate the node. You need to retrieve the "kubeadmin join …" command that you received at the end of the "kubeadm init …" cmd. You must run the following command as ROOT on the Kubernetes NODE (remember that you got a different hash and token, the command below is an example): kubeadm join 10.1.20.20:6443 --token rlbc20.va65z7eauz89mmuv --discovery-token-ca-cert-hash sha256:42eca5bf49c645ff143f972f6bc88a59468a30276f907bf40da3bcf5127c0375 We can check the status of our node by running the following command on our MASTER (ubuntu user) kubectl get nodes Both component should have a "Ready" status. Last step is to setup Calico between our BIG-IPs and our Kubernetes cluster Setup Calico We need to setup Calico on our BIG-IPs and k8S components. We will setup our environment with the following AS Number: 64512 Step1: BIG-IPs Calico setup F5 has documented this procedure here We will use our self IPs on the internal network. Therefore we need to make sure of the following: The self IP has a portlock down setup to "Allow All" Or add a TCP custom port to the self IP: TCP port 179 You need to allow BGP on the default route domain 0 on your BIG-IPs. Connect to the BIG-IP GUI on go into Network > Route domain. Click on Route Domain "0" and allow BGP Click on "Update" Once this is done,connect via SSH and get into a bash shell on both BIG-IPs Run the following commands: #access the IMI Shell imish #Switch to enable mode enable #Enter configuration mode config terminal #Setup route bgp with AS Number 64512 router bgp 64512 #Create BGP Peer group neighbor calico-k8s peer-group #assign peer group as BGP neighbors neighbor calico-k8s remote-as 64512 #we need to add all the peers: the other BIG-IP, our k8s components neighbor 10.1.20.20 peer-group calico-k8s neighbor 10.1.20.21 peer-group calico-k8s #on BIG-IP1, run neighbor 10.1.20.12 peer-group calico-k8s #on BIG-IP2, run neighbor 10.1.20.11 peer-group calico-k8s #save configuration write #exit end You can review your setup with the command show ip bgp neighbors Note: your other BIG-IP should be identified with a router ID and have a BGP state of "Active". The k8s node won't have a router ID since BGP hasn't already been setup on those nodes. Keep your BIG-IP SSH sessions open. 
We'll re-use the imish terminal once our k8s components have Calico set up. Step2: Kubernetes Calico setup On the MASTER node (not as root), we need to retrieve the calicoctl binary curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.10.0/calicoctl chmod +x calicoctl sudo mv calicoctl /usr/local/bin We need to set up calicoctl as explained here sudo mkdir /etc/calico Create a file /etc/calico/calicoctl.cfg with your preferred editor (you'll need sudo privileges). This file should contain the following: apiVersion: projectcalico.org/v3 kind: CalicoAPIConfig metadata: spec: datastoreType: "kubernetes" kubeconfig: "/home/ubuntu/config" Note: you may have to change the path specified by the kubeconfig parameter based on the user you use to run kubectl commands. To make sure that calicoctl is properly set up, run the command calicoctl get nodes You should get a list of your Kubernetes nodes. Now we can work on our Calico/BGP configuration as documented here. On the MASTER node: cat << EOF | calicoctl create -f - apiVersion: projectcalico.org/v3 kind: BGPConfiguration metadata: name: default spec: logSeverityScreen: Info nodeToNodeMeshEnabled: true asNumber: 64512 EOF Note: because we set nodeToNodeMeshEnabled to true, the k8s nodes will receive the same config. We may now set up our BIG-IP BGP peers. Replace the peerIP value with the IPs of your BIG-IPs: cat << EOF | calicoctl create -f - apiVersion: projectcalico.org/v3 kind: BGPPeer metadata: name: bgppeer-global-bigip1 spec: peerIP: 10.1.20.11 asNumber: 64512 EOF cat << EOF | calicoctl create -f - apiVersion: projectcalico.org/v3 kind: BGPPeer metadata: name: bgppeer-global-bigip2 spec: peerIP: 10.1.20.12 asNumber: 64512 EOF Review your setup with the command: calicoctl get bgpPeer If you go back to your BIG-IP SSH connections, you may check that your Kubernetes nodes now have a router ID in your BGP configuration: imish show ip bgp neighbors Summary So far we have: Setup Kubernetes Setup Calico between our BIG-IPs and our Kubernetes cluster In the next article, we will set up F5 Container Ingress Services (F5 CIS).
Simplifying Kubernetes Ingress using F5 Technologies
Kubernetes Ingress is an important component of any Kubernetes environment, as you're likely trying to build applications that need to be accessed from outside of the k8s environment. F5 provides both BIG-IP and NGINX approaches to Ingress and, with that, the breadth of F5 solutions can be applied to a Kubernetes environment. This might be overwhelming if you don't have experience in all of those solutions and you may simply want to expose an application to start. Mark Dittmer, Sr. Product Management Engineer at F5, has put together a simple walkthrough guide for how to configure Kubernetes Ingress using F5 technologies. He incorporated both BIG-IP Container Ingress Services and NGINX Ingress Controller in this walkthrough. By the end, you'll be able to securely present your k8s Service using an IP that is dynamically provisioned from a range you specify and leverage the Service Type LoadBalancer. Simple as that! GitHub repo: https://github.com/mdditt2000/k8s-bigip-ctlr/tree/main/user_guides/simplifying-ingress
BIG-IP in Tanzu Kubernetes Grid in an NSX-T network
Introduction BIG-IP in Tanzu Kubernetes Grid provides an Ingress solution implemented with a single tier of load balancing. Typically, Ingress requires an in-cluster Ingress Controller and an external Load Balancer. By using BIG-IP, Ingress services get greatly simplified, while improving performance by removing one hop and at the same time exposing all the BIG-IP's advanced load balancing functionalities and security modules. Tanzu Kubernetes Grid is a Kubernetes distribution supported by VMware that comes with the choice of two CNIs: Antrea - Geneve overlay based. Calico - BGP based, no overlay networking. When using Antrea in NSX-T environments, Antrea uses its own Geneve overlay on top of NSX-T's own Geneve overlay networking. This makes communications in the TKG cluster carry the overhead of two encapsulations, as can be seen in the next wireshark capture. Modern NICs are able to off-load the CPUs from the task of handling Geneve's encapsulation, but they are not able to cope with the double encapsulation that occurs when using Antrea+NSX-T. In this blog we will describe how to set up BIG-IP with TKG's Calico CNI, which doesn't have such overhead. The article is divided into the following sections: Deploying a TKG cluster with built-in Calico Deploying BIG-IP in TKG with NSX-T Configuring BIG-IP and TKG to peer with Calico (BGP) Configuring BIG-IP to handle Kubernetes workloads Verifying the resulting configuration Alternative topologies and multi-tenancy considerations Closing notes All the configuration files referenced in this blog can be found in the repository https://github.com/f5devcentral/f5-bd-tanzu-tkg-bigip Deploying a TKG cluster with built-in Calico Deploying a TKG cluster is as simple as running kubectl with a definition like the following one: Note that the only thing required to choose between using Antrea or Calico is to specify the cni: name: field. It is perfectly fine to run clusters with different CNIs in the same environment. At the time of this writing (H1 2021), Calico v3.11 is the version included with Kubernetes v1.18. As you can see from the definition above, we will be creating a small cluster for testing purposes. This can be scaled up/down later on as desired by re-applying an updated TKG cluster definition.
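The cluster definition itself appears above as a screenshot; for reference, a TanzuKubernetesCluster manifest that selects Calico looks roughly like the hedged sketch below (the Kubernetes release, VM classes, storage class and CIDR ranges are illustrative placeholders, and the working definition is in the repository linked above):

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg2
  namespace: tkg2
spec:
  distribution:
    version: v1.18                         # illustrative Kubernetes release
  topology:
    controlPlane:
      count: 1
      class: best-effort-small             # VM class names are environment specific
      storageClass: vsan-default-storage-policy
    workers:
      count: 2
      class: best-effort-small
      storageClass: vsan-default-storage-policy
  settings:
    network:
      cni:
        name: calico                       # the only change needed to use Calico instead of Antrea
      pods:
        cidrBlocks: ["172.20.0.0/16"]      # illustrative POD range
      services:
        cidrBlocks: ["10.96.0.0/12"]       # illustrative Services range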
The IP addresses used by the BIG-IPs in the Node's segment, 10.244.1.{124,125,126}, will need to be allocated in NSX-T so that when scaling the TKG cluster they are not used by the Kubernetes nodes. This is done in the next figure, by first logging in to the NSX-T manager > Networking > IP Address Pools, where we will find the subnet allocated to our tkg2-tkg2 cluster. In this screen we can obtain the API path to operate with it. In the figure we show the use of POSTMAN to make the IP address allocation with an API request, more precisely a PUT policy/api/v1/<API path>/ip-allocations/<name of allocation> request indicating in the body the desired IP to allocate. In this link at code.vmware.com you can find the full details of this API call. The names of the IP allocations are not relevant. In this case the names I chose are: bigip-floating, bigip1 and bigip2 respectively. Configuring BIG-IP and TKG to peer with Calico (BGP) In the BIG-IP we will configure the VLANs and Self-IPs normally; the only additional consideration is that since we are going to use Calico we have to allow BGP communication (TCP port 179) in the port lockdown settings of the TKG self-IPs (non-floating only), as shown next. Next we will enable BGP in Networking -> Route Domains -> 0 (existing), as shown next. At this point we can configure BGP using the imish command line, which gives us access to the dynamic routing protocols configuration: The figure above shows the configuration for BIG-IP1. Only the router-id needs to be changed to apply it to BIG-IP2. To finish the Calico configuration we have to instruct the TKG cluster to peer with the BIG-IPs; this is done with the following Kubernetes resources: kubectl apply -f - <<EOF kind: BGPPeer apiVersion: crd.projectcalico.org/v1 metadata: name: bigip1 spec: peerIP: 10.244.1.125 asNumber: 64512 --- kind: BGPPeer apiVersion: crd.projectcalico.org/v1 metadata: name: bigip2 spec: peerIP: 10.244.1.126 asNumber: 64512 EOF As you might have noticed, we are not using BGP passwords; this is because Calico v3.17+ is needed for this feature. In any case the Node's network is protected by default by TKG's firewall rules in the T1 Gateway. Finally we will verify that all the routes are advertised as expected: The above verification has to be done in both BIG-IPs. Note from the above verification that it is expected to see the nexthop 0.0.0.0 for the SNAT range. In the verification we can see a /26 prefix for each Node (these prefixes are created by Calico on demand when more PODs are created) and a /24 prefix for the SNAT range. These are shown next as a picture. The network 100.128.0.0/24 has been chosen semi-arbitrarily. It is a range after the POD range that we indicated on cluster creation. This range will only be seen within the iBGP mesh (the PODs). It is best to use a range not used at all within NSX-T to avoid any possible IP range clashes. It is worth remarking that the SNAT Pool range will be used for VIP-to-POD traffic, whilst for health monitoring the BIG-IP will use the Self-IPs in the Kubernetes Node's segment. Configuring BIG-IP to handle Kubernetes workloads BIG-IP plugs into the Kubernetes API by means of Container Ingress Services (CIS). We deploy one CIS POD per BIG-IP. Each CIS watches Kubernetes events and, when a configuration is applied or Deployment scaling occurs, it will update the BIG-IP's configuration.
CIS works with BIG-IP appliances, chasis or Virtual Edition and exposes BIG-IP's advanced services in the Kubernetes API using standard Ingress resources as well as Custom Resource Definitions (CRDs). Each CIS instance works independently on each BIG-IP. This is good for redundancy purposes but at the same time it makes the two BIG-IPs believe that they are out of sync of each other because the CIS instances update the tkg partition of each BIG-IP independently. This behavior is only cosmetic. The BIG-IP's cluster failover mechanisms (up to 8 BIG-IPs) are independent of this. The next steps are to configure CIS in Tanzu Kubernetes Grid , which is the same as with any regular Kubernetes with Calico. We will install CIS by means of using the Helm installer kubectl create secret generic bigip1-login -n kube-system --from-literal=username=admin --from-literal=password=<password> kubectl create secret generic bigip2-login -n kube-system --from-literal=username=admin --from-literal=password=<password> Add the CIS chart repository in Helm using following command: helm repo add f5-stable https://f5networks.github.io/charts/stable Create values-bigip<unit>.yaml for each BIG-IP as follows: bigip_login_secret: bigip1-login rbac: create: true serviceAccount: create: true name: k8s-bigip1-ctlr # This namespace is where the Controller lives; namespace: kube-system args: # See https://clouddocs.f5.com/containers/latest/userguide/config-parameters.html # NOTE: helm has difficulty with values using `-`; `_` are used for naming # and are replaced with `-` during rendering. bigip_url: 192.168.200.14 bigip_partition: tkg default_ingress_ip: 10.106.32.100 # Use the following settings if you want to restrict CIS to specific namespaces # namespace: # namespace_label: pool_member_type: cluster # Trust default BIG-IP's self-signed TLS certificate insecure: true # Force using the SNAT pool override-as3-declaration: kube-system/bigip-snatpool log-as3-response: true image: # Use the tag to target a specific version of the Controller user: f5networks repo: k8s-bigip-ctlr pullPolicy: Always resources: {} version: latest Note that in the above values file the following values are per BIG-IP unit: Login's Secret ServiceAccount Management IP The values files above indicate that we are going to use two resources: A partition named "tkg" in BIG-IP An AS3 override ConfigMap to inject custom BIG-IP configurations on top of a regular Ingress declaration. In this case we use this feature to force all the traffic to use a SNATPool. 
Next, create the BIG-IP partition indicated in the Helm values file: root@(bigip1)(cfg-sync In Sync)(Standby)(/Common)(tmos)# create auth partition tkg root@(bigip1)(cfg-sync Changes Pending)(Standby)(/Common)(tmos)# run cm config-sync to-group Create a ConfigMap to apply our custom SNATPool: kubectl apply -n kube-system -f bigip-snatpool.yaml Do a Helm chart installation for each BIG-IP unit, following the next pattern: helm install -n kube-system -f values-bigip<unit>.yaml --name-template bigip<unit> f5-stable/f5-bigip-ctlr In a 2 BIG-IP cluster that is: helm install -n kube-system -f values-bigip1.yaml --name-template bigip1 f5-stable/f5-bigip-ctlr helm install -n kube-system -f values-bigip2.yaml --name-template bigip2 f5-stable/f5-bigip-ctlr which will result in the following for each BIG-IP: $ helm -n kube-system ls NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION bigip1 kube-system 1 2021-06-09 15:19:20.651138 +0200 CEST deployed f5-bigip-ctlr bigip2 kube-system 1 2021-06-10 10:01:57.902215 +0200 CEST deployed f5-bigip-ctlr Configuring a Kubernetes Ingress Service We are ready to deploy Ingress services. In this case we will deploy a single Ingress, named cafe.tkg.bd.f5.com, which will perform TLS termination and will send the traffic to the applications tea and coffee depending on the URL requested. This is exhibited in the next figure: This is deployed with the following commands: kubectl create ns cafe kubectl apply -n cafe -f cafe-rbac.yaml kubectl apply -n cafe -f cafe-svcs.yaml kubectl apply -n cafe -f cafe-ingress.yaml We are going to describe the Ingress definition cafe-ingress.yaml: CIS supports a wide range of annotations for customizing Ingress services. You can find detailed information in the supported annotations reference page, but when these are large the Ingress definitions might become hard to maintain. BIG-IP's solution to the Ingress resource's limited schema capabilities is the use of the AS3 override ConfigMap mechanism. In our configuration we are using this mechanism to create a SNAT Pool that we apply to the CIS's default Ingress VIP. This is shown next: Given that AS3 Override customizations are applied outside the Ingress definitions, by means of an overlay ConfigMap, we avoid having Ingress definitions with huge annotation sections. In this blog we have implemented the health checks with annotations, but these could have been implemented with the AS3 Override ConfigMap mechanism as well. The AS3 Override ConfigMap can be used to define any advanced service configuration which is possible with any module of BIG-IP, not only LTM. Please check this link for more information about AS3 automation. Verifying the resulting configuration At this point we should see the following Kubernetes resources: And in the BIG-IP UI we will see the objects shown next in the Network Map section: But in order to reach the VIPs we need to add an NSX-T firewall rule in TKG's T1 gateway. This is shown next: After the rule has been applied we can run a curl command to perform an end-to-end validation: Alternative topologies and multi-tenancy considerations In this blog post we have shown a topology where each cluster and its VIP segment is contained within the scope of a T1 Gateway. A single BIG-IP cluster can be used for several clusters just by using more interfaces. We could also use a shared VIP network directly connected in the T0 Gateway, as shown next. Note that in the above example there will be at least one CIS POD for each BIG-IP/cluster combination.
Notice that we mean "at least one" CIS. This is because it is also possible to have multiple CIS per TKG cluster. CIS can be configured to listen to a specific set of namespaces (possibly selected using labels) and to own a specific partition in the BIG-IP. This is shown next. Closing notes Before deleting a TKG cluster, we should delete the NSX-T resources that were created when integrating the BIG-IP. These are: The TKG NSX-T segments from the BIG-IPs (disconnecting them is not enough). The IP allocations. The T1 Gateway Firewall Rules, if these have been created. We have seen how easy it is to use TKG's Calico CNI and take advantage of its reduced overlay overhead. We've also seen how to configure a BIG-IP cluster in TKG to provide a simpler, higher-performance single-tier Ingress Controller. We have only shown the use of CIS with BIG-IP's LTM load balancing module and the standard Ingress resource type. We've also seen how to extend the Ingress resource's limited capabilities in a manageable fashion by using the AS3 override mechanism while also reducing the annotations required. It is also worth remarking on the additional CRDs that CIS provides besides the standard Ingress resource type. The possibilities are limitless: any BIG-IP configuration or module that you are used to using for VMs or bare-metal workloads can be applied to containers through CIS. We hope that you enjoyed this blog. We look forward to your comments.
Containers: plug-and-play code in DevOps world - part 2
Related articles: DevOps Explained to the Layman Containers: plug-and-play code in DevOps world - part 1 Quick Intro In part 1, I explained containers at a very high level and mentioned that Docker was the most popular container platform. I also added that containers are tiny isolated environments within the same Linux host using the same Linux kernel, and they are so lightweight that they pack only the libraries and dependencies needed to get your application running. This is good because even for very distinct applications that require a certain Linux distro to run or different libraries, there should be no problem at all. If you're a Linux guy like me, you'd probably want to know that the most popular container platform (docker) uses dockerd as the front-line daemon: root@albuquerque-docker:~# ps aux | grep dockerd root 753 0.1 0.4 1865588 74692 ? Ssl Mar19 4:39 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock This is what we're going to do here: Running my hello world app in the traditional way (just a simple hello world!) Containerising my hello world app! (how to pack your application into a Docker image!) Quick note about image layers (brief note about Docker image layers!) Running our containerised app (we run the same hello world app but now within a Docker container) Uploading my app to an online Registry and Retrieving it (we now upload our app to DockerHub so we can pull it from anywhere) How do we manage multiple container images (what do we do if our application is so big that we've got lots of containers?)
With that in mind, by default we can tell Docker in its default configuration file (Dockerfile) about the application you'd like it to pack (pack = creatingan image): root@albuquerque-docker:~# cat Dockerfile FROM ubuntu:latest RUN apt-get update && apt-get upgrade -y && apt-get install python3 -y ADD hello.py / CMD [ "./hello.py" ] root@albuquerque-docker:~# FROM: tell docker what is your base image (don't worry, it automatically downloads the image fromDockerhubif your image is not locally installed) RUN: Any command typed in here is executed and becomes part of base image ADD: copies source file (hello.py) to a directory you pick inside your container (/ in this case here) CMD: Any command typed in here is executed after container is already running So, in above configuration we're telling Docker to build an image to do the following: Install ubuntu Linux as our base image (this is not the whole OS, just the bare minimum) Update and upgrade all packages installed and install python3 Add our hello.py script from current directory to / directory inside the container Run it Exit, because the only task it had (running our script) has been completed Now we execute this command to build the image based on ourDockerfile: Note: notice I didn't specify Dockerfile in my command below. That's because it's the default filename so I just omitted it. root@albuquerque-docker:~# docker build -t hello-world-rodrigo . Sending build context to Docker daemon 607MB Step 1/4 : FROM ubuntu:latest ---> 94e814e2efa8 Step 2/4 : RUN apt-get update && apt-get upgrade -y && apt-get install python3 -y ---> Running in a63919569292 Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB] . . <omitted for brevity> . Reading state information... Calculating upgrade... The following packages will be upgraded: apt libapt-pkg5.0 libseccomp2 libsystemd0 libudev1 5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 2268 kB of archives. After this operation, 15.4 kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libudev1 amd64 237-3ubuntu10.15 [54.2 kB] Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libapt-pkg5.0 amd64 1.6.10 [805 kB] Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libseccomp2 amd64 2.3.1-2.1ubuntu4.1 [39.1 kB] Get:4 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 apt amd64 1.6.10 [1165 kB] Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libsystemd0 amd64 237-3ubuntu10.15 [205 kB] debconf: delaying package configuration, since apt-utils is not installed Fetched 2268 kB in 3s (862 kB/s) (Reading database ... 4039 files and directories currently installed.) Preparing to unpack .../libudev1_237-3ubuntu10.15_amd64.deb ... . . <omitted for brevity> . Suggested packages: python3-doc python3-tk python3-venv python3.6-venv python3.6-doc binutils binfmt-support readline-doc The following NEW packages will be installed: file libexpat1 libmagic-mgc libmagic1 libmpdec2 libpython3-stdlib libpython3.6-minimal libpython3.6-stdlib libreadline7 libsqlite3-0 libssl1.1 mime-support python3 python3-minimal python3.6 python3.6-minimal readline-common xz-utils 0 upgraded, 18 newly installed, 0 to remove and 0 not upgraded. Need to get 6477 kB of archives. After this operation, 33.5 MB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libssl1.1 amd64 1.1.0g-2ubuntu4.3 [1130 kB] . . <omitted for brevity> . 
Setting up libpython3-stdlib:amd64 (3.6.7-1~18.04) ...
Setting up python3 (3.6.7-1~18.04) ...
running python rtupdate hooks for python3.6...
running python post-rtupdate hooks for python3.6...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Removing intermediate container a63919569292
---> 6d564b46521d
Step 3/4 : ADD hello.py /
---> a936bffc4f17
Step 4/4 : CMD [ "./hello.py" ]
---> Running in bea77d51f830
Removing intermediate container bea77d51f830
---> e6e4f99ed9f3
Successfully built e6e4f99ed9f3
Successfully tagged hello-world-rodrigo:latest

That's it. You've now packed your application into a Docker image! We can now list our images to confirm our image is there:

root@albuquerque-docker:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world-rodrigo latest e6e4f99ed9f3 2 minutes ago 155MB
ubuntu latest 94e814e2efa8 2 minutes ago 88.9MB
root@albuquerque-docker:~#

Note that the Ubuntu image was also downloaded, as it is the base image our app runs on.

Quick note about image Layers

Notice that Docker uses layers to be more efficient, and they're reused among containers on the same host:

root@albuquerque-docker:~# docker inspect hello-world-rodrigo | grep Layers -A 8
"Layers": [
"sha256:762d8e1a60542b83df67c13ec0d75517e5104dee84d8aa7fe5401113f89854d9",
"sha256:e45cfbc98a505924878945fdb23138b8be5d2fbe8836c6a5ab1ac31afd28aa69",
"sha256:d60e01b37e74f12aa90456c74e161f3a3e7c690b056c2974407c9e1f4c51d25b",
"sha256:b57c79f4a9f3f7e87b38c17ab61a55428d3391e417acaa5f2f761c0e7e3af409",
"sha256:51bedea20e25171f7a6fb32fdba24cce322be0d1a68eab7e149f5a7ee320290d",
"sha256:b4cfcee2534584d181cbedbf25a5e9daa742a6306c207aec31fc3a8197606565"
]
},

You can think of layers roughly like this: the first layer is the bare-bones base OS, the second one is a subsequent modification (e.g. installing python3), and so on. The idea is to share layers (read-only) with different containers so we don't need to create a copy of the same layer. Just make sure you understand that what we're sharing here is a read-only image. Anything you write on top of that, Docker creates another layer for! That's the magic!

Running our Containerised App

Lastly, we run our image with our hello world app:

root@albuquerque-docker:~# docker run hello-world-rodrigo
hello, Rodrigo!
root@albuquerque-docker:~#

As I said before, our container exited, and that's because the only task assigned to the container was to run our script, so by default it exits. We can confirm there is no container running:

root@albuquerque-docker:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@albuquerque-docker:~#

If you want to run it in daemon mode there is an option called -d, but you're only supposed to use this option if you're really going to run a daemon. Let me use NGINX because our hello-world image is not suitable for daemon mode:

root@albuquerque-docker:~# docker run -d nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
f7e2b70d04ae: Pull complete
08dd01e3f3ac: Pull complete
d9ef3a1eb792: Pull complete
Digest: sha256:98efe605f61725fd817ea69521b0eeb32bef007af0e3d0aeb6258c6e6fe7fc1a
Status: Downloaded newer image for nginx:latest
c97d363a1cc2bf578d62e57ec677bca69f27746974b9d5a49dccffd17dd75a1c

Yes, you can run the docker run command against an image you haven't pulled yet and Docker will download the image and run the container for you.
Let's just confirm our container didn't exit and is still there:

root@albuquerque-docker:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c97d363a1cc2 nginx "nginx -g 'daemon of…" 5 seconds ago Up 4 seconds 80/tcp amazing_lalande

Let's confirm we can reach NGINX inside the container. First we check the container's locally assigned IP address:

root@albuquerque-docker:~# docker inspect c97d363a1cc2 | grep IPAdd
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",

Now we confirm we have NGINX running inside a Docker container:

root@albuquerque-docker:~# curl http://172.17.0.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

At the moment, my NGINX server is not reachable outside of my host machine (172.16.199.57 is the external IP address of our container's host machine):

rodrigo@ubuntu:~$ curl http://172.16.199.57
curl: (7) Failed to connect to 172.16.199.57 port 80: Connection refused

To solve this, just add the -p flag like this: -p <port host will listen on for external connections>:<port our container is listening on>

Let's delete our NGINX container first:

root@albuquerque-docker:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c97d363a1cc2 nginx "nginx -g 'daemon of…" 8 minutes ago Up 8 minutes 80/tcp amazing_lalande
root@albuquerque-docker:~# docker rm c97d363a1cc2
Error response from daemon: You cannot remove a running container c97d363a1cc2bf578d62e57ec677bca69f27746974b9d5a49dccffd17dd75a1c. Stop the container before attempting removal or force remove
root@albuquerque-docker:~# docker stop c97d363a1cc2
c97d363a1cc2
root@albuquerque-docker:~# docker rm c97d363a1cc2
c97d363a1cc2
root@albuquerque-docker:~# docker run -d -p 80:80 nginx
a8b0454bae36e52f3bdafe4d21eea2f257895c9ea7ca93542b760d7ef89bdd7f
root@albuquerque-docker:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a8b0454bae36 nginx "nginx -g 'daemon of…" 7 seconds ago Up 5 seconds 0.0.0.0:80->80/tcp thirsty_poitras

Now, let me reach it from an external host:

rodrigo@ubuntu:~$ curl http://172.16.199.57
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Uploading my App to an online Registry and Retrieving it

You can also upload your containerised application to an online registry such as Docker Hub with the docker push command. We first need to create a Docker Hub account and then a repository. Because my username is digofarias, my hello-world-rodrigo image will actually have to be named locally as digofarias/hello-world-rodrigo.
Let's list our images:

root@albuquerque-docker:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 94e814e2efa8 8 minutes ago 88.9MB
hello-world-rodrigo latest e6e4f99ed9f3 8 minutes ago 155MB

If I upload the image this way, it won't work, so I need to rename it to digofarias/hello-world-rodrigo like this:

root@albuquerque-docker:~# docker tag hello-world-rodrigo:latest digofarias/hello-world-rodrigo:latest
root@albuquerque-docker:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world-rodrigo latest e6e4f99ed9f3 9 minutes ago 155MB
digofarias/hello-world-rodrigo latest e6e4f99ed9f3 9 minutes ago 155MB

We can now log in to our newly created account:

root@albuquerque-docker:~# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: digofarias
Password: **********
WARNING! Your password will be stored unencrypted in /home/rodrigo/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

Lastly, we push our code to Docker Hub:

root@albuquerque-docker:~# docker push digofarias/hello-world-rodrigo
The push refers to repository [docker.io/digofarias/hello-world-rodrigo]
b4cfcee25345: Pushed
51bedea20e25: Pushed
b57c79f4a9f3: Pushed
d60e01b37e74: Pushed
e45cfbc98a50: Pushed
762d8e1a6054: Pushed
latest: digest: sha256:b69a5fd119c8e9171665231a0c1b40ebe98fd79457ede93f45d63ec1b17e60b8 size: 1569

If you go to any other machine connected to the Internet with Docker installed, you can run my hello-world app:

root@albuquerque-docker:~# docker run digofarias/hello-world-rodrigo
hello, Rodrigo!

You don't need to worry about dependencies or anything else. If it worked properly on your machine, it should also work anywhere else, as the environment inside the container should be the same. In the real world, you'd probably be uploading just one component of your code, and your real application could be composed of lots of containers that can potentially communicate with each other via an API.

How do we manage multiple container images?

Remember that in the real world we might need to create multiple components (each inside its own container) and we'll have an ecosystem of containers that eventually makes up our application or service. As I said in part 1, in order to manage this ecosystem we typically use a container orchestrator. Currently there are a couple of them, like Docker Swarm, but Kubernetes is the most popular one. Kubernetes is a topic for a whole new article (or many articles), but you typically declare the container images in a Kubernetes deployment file and Kubernetes downloads, runs and monitors the whole ecosystem (i.e. your application) for you. Just remember that a container is typically just one component of your application that communicates with other components/containers via an API.
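To make that last point a little more concrete, here is a minimal sketch of how the image we just pushed could be handed over to Kubernetes. It assumes you already have a working Kubernetes cluster and kubectl configured (neither is covered in this article), and the deployment name is just illustrative:

# declare the image we pushed earlier as a deployment
kubectl create deployment hello-world --image=digofarias/hello-world-rodrigo
# ask for three copies of the container; Kubernetes downloads, runs and monitors them
kubectl scale deployment hello-world --replicas=3
# check what was created
kubectl get pods

One caveat: just like with docker run, our hello-world image exits as soon as it prints its message, so a long-running image such as nginx is a better fit for this kind of deployment. The point is simply that you declare what you want and the orchestrator keeps it running for you.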
Scaling applications with container-based clusters is on the rise. Whether as part of a private cloud implementation or just part of an effort to modernize the delivery environment, we continue to see applications – traditional and microservices-based – being deployed and scaled within containerized environments with platforms like Red Hat OpenShift. And while the container platforms themselves do an excellent job of scaling apps and services out and back in by increasing and reducing the number of containers available to respond to requests, that doesn’t address the question of how requests get to the cluster in the first place.

The answer usually (almost always, but Heisenberg frowns on such a high degree of certainty) lies in routing, upstream from the container cluster. As the OpenShift documentation states quite well, “The OpenShift Container Platform router is the ingress point for all external traffic destined for services in your OpenShift Container Platform installation.” This is the general architectural pattern for all containerized app infrastructures, where an upstream proxy is responsible for scale by routing requests across a cluster. Some patterns insert yet another layer of local scalability within the cluster, but all require the services of an upstream proxy capable of routing requests to the apps distributed across the cluster.

That generally requires some networking magic, which is often seen as one of the top barriers to container adoption. ClusterHQ’s “Container Market Adoption Survey” report released in mid-2016 found that networking ranked second, after storage, as one of the barriers to deploying containers. Which is why it’s important to simplify the required networking whenever possible.

Routing is one of the functions that necessarily relies on networking. In the case of OpenShift, that networking revolves around an SDN overlay network (VXLAN). Red Hat has natively supported F5 as a router (since OpenShift Container Platform 3.0.2), but in the past deployment required a ramp node in the cluster to act as a gateway between the BIG-IP and a pod. This was necessary to enable VXLAN/VLAN bridging, but it added weight to the deployment in the form of additional components (the ramp node) and networking configuration. While a workable solution, the ramp node quickly becomes a bottleneck that can impede performance, making it less than ideal.

The release of OpenShift Container Platform 3.4 brought some key benefits via an updated integration of F5. This includes the removal of the need for a ramp node and the addition of BIG-IP as a VXLAN VTEP endpoint. That means the BIG-IP now has direct access to the Pods running inside OpenShift. Now, when a container is launched or killed, rather than updating the ramp node, the update goes directly to the F5 BIG-IP via our REST API.

This updated integration results in a simpler architecture (and easier deployment) that improves performance of apps served from the cluster and scalability of the overall architecture. It also offers three key benefits. First, fewer hops are required; elimination of the ramp node means no ramp-node-induced bottlenecks. Second, BIG-IP has direct access to the pods for health monitoring. Lastly, the integration also allows for app/page routing (L7 policy steering) by inspecting HTTP headers.

Until now, Red Hat has owned the integration of F5 into OpenShift, but with this latest release we’ve taken on responsibility for maintaining and supporting the code.
The new integration is available now, and you can learn more about how to use an F5 BIG-IP as an OpenShift router in its documentation.674Views0likes2CommentsLoad Balancing versus Application Routing
As the lines between DevOps and NetOps continue to blur thanks to the highly distributed models of modern application architectures, there rises a need to understand the difference between load balancing and application routing. These are not the same thing, even though they might be provided by the same service.

Load balancing is designed to provide availability through horizontal scale. To scale an application, a load balancer distributes requests across a pool (farm, cluster, whatevs) of duplicated applications (or services). The decision on which pool member gets to respond to a request is based on an algorithm. That algorithm can be quite apathetic as to whether or not the chosen pool member is capable of responding, or it can be “smart” about its decision, factoring in response times, current load, and even weighting decisions based on all of the above. This is the most basic load balancing pattern in existence. It’s been the foundation for availability (scale and failover) since 1996.

Load balancing of this kind is what we often (fondly) refer to as “dumb”. That’s because it’s almost always based on TCP (layer 4 of the OSI stack). Like honey badger, it don’t care about the application (or its protocols) at all. All it worries about is receiving a TCP connection request and matching it up with one of the members in the appropriate pool. It’s not necessarily efficient, but gosh darn it, it works and it works well. Systems have progressed to the point that purpose-built software designed to do nothing but load balancing can manage millions of connections simultaneously. It’s really quite amazing if you’re at all aware that back in the early 2000s most systems could only handle on the order of thousands of simultaneous requests.

Now, application routing is something altogether different. First, it requires the system to care about the application and its protocols. That’s because in order to route an application request, the target must first be identified. This identification can range from something as simple as “what’s the host name” to something as complicated as “what’s the value of an element hidden somewhere in the payload in the form of a JSON key:value pair or XML element.” In between lies the most common application identifier – the URI.

Application “routes” can be deduced from the URI by examining its path and extracting certain pieces. This is akin to routing in Express (one of the more popular node.js API frameworks). A URI path in the form of /user/profile/xxxxx – where xxxxx is an actual user name or account number – can be split apart and used to “route” the request to a specific pool for load balancing or to a designated member (application/service instance). This happens at the “virtual server” construct of the load balancer using some sort of policy or code.

Application routing occurs before the load balancing decision. In effect, application routing enables a single load balancer to distribute requests intelligently across multiple applications or services. If you consider modern microservices-based applications combined with APIs (URIs representing specific requests), you can see how this type of functionality becomes useful. An API can be represented as a single domain (api.example.com) to the client, but behind the scenes it is actually composed of multiple applications or services that are scaled individually using a combination of application routing and load balancing.
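If it helps to see the two decisions side by side, here is a small illustrative sketch – plain Python, not any particular product’s API – in which an application route is chosen first by inspecting the URI path, and a load balancing decision is then made across the members of that pool. The route prefixes and member addresses are made up for the example:

import itertools

# hypothetical pools: each application route maps to its own set of duplicated instances
pools = {
    "/user/profile": ["10.0.1.10:8080", "10.0.1.11:8080"],
    "/user/orders":  ["10.0.2.10:8080", "10.0.2.11:8080", "10.0.2.12:8080"],
}

# one round-robin iterator per pool -- the "dumb" load balancing algorithm
round_robin = {route: itertools.cycle(members) for route, members in pools.items()}

def pick_member(uri_path: str) -> str:
    # application routing: inspect the URI (layer 7) to select a pool
    for route in pools:
        if uri_path.startswith(route):
            # load balancing: distribute requests across the chosen pool
            return next(round_robin[route])
    raise LookupError(f"no route for {uri_path}")

print(pick_member("/user/profile/rodrigo"))   # 10.0.1.10:8080
print(pick_member("/user/profile/rodrigo"))   # 10.0.1.11:8080
print(pick_member("/user/orders/42"))         # 10.0.2.10:8080

The routing decision (which pool?) happens once per request before the load balancing decision (which member of that pool?), which is exactly the ordering described above.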
One of the reasons (aside from my pedantic nature) to understand the difference between application routing and load balancing is that the two are not interchangeable. Routing makes a decision on where to forward something – a packet, an application request, an approval in your business workflow. Load balancing distributes something (packets, requests, approvals) across a set of resources designed to process that something. You really can’t (shouldn’t) substitute one for the other.

But what it also means is that you have the freedom to mix and match how these two interact with one another. You can, for example, use plain old load balancing (POLB) for ingress load balancing and then use application routing (layer 7) to distribute requests (inside a container cluster, perhaps). You can also switch that around and use application routing for ingress traffic, distributing it via POLB inside the application architecture. Load balancing and application routing can be layered, as well, to achieve specific goals with respect to availability and scale.

I prefer to use application routing at the ingress because it enables greater variety and granularity in implementing both operational and application architectures more supportive of modern deployment patterns. The decision on where to use POLB vs application routing is largely based on application architecture and requirements. Scale can be achieved with both, though with differing levels of efficacy. That discussion is beyond the scope of today’s post, but there are trade-offs.

It cannot be said often enough that the key to scaling applications today is about architectures, not algorithms. Understanding the differences between application routing and load balancing should provide a solid basis for designing highly scalable architectures.
Inline, side-arm, reverse, and forward. These used to be the terms we used to describe the architectural placement of proxies in the network. Today, containers use some of the same terminology, but are introducing new ones. That’s an opportunity for me to extemporaneously expound* on my favorite of all topics: the proxy.

One of the primary drivers of cloud (once we all got past the pipedream of cost containment) has been scalability. Scale has challenged agility (and sometimes won) in various surveys over the past five years as the number one benefit organizations seek by deploying apps in cloud computing environments. That’s in part because in a digital economy (in which we now operate), apps have become the digital equivalent of brick-and-mortar “open/closed” signs and the manifestation of digital customer assistance. Slow, unresponsive apps have the same effect as turning out the lights or understaffing the store. Apps need to be available and responsive to meet demand. Scale is the technical response to achieving that business goal. Cloud not only provides the ability to scale, but offers the ability to scale automatically. To do that requires a load balancer. Because that’s how we scale apps – with proxies that load balance traffic/requests.

Containers are no different with respect to expectations around scale. Containers must scale – and scale automatically – and that means the use of load balancers (proxies). If you’re using native capabilities, you’re doing primitive load balancing based on TCP/UDP. Generally speaking, container-based proxy implementations aren’t fluent in HTTP or other application layer protocols and don’t offer capabilities beyond plain old load balancing (POLB). That’s often good enough, as container scale operates on a cloned, horizontal premise – to scale an app, add another copy and distribute requests across it. Layer 7 (HTTP) routing capabilities are found at the ingress (in ingress controllers and API gateways) and are used as much (or more) for app routing as they are to scale applications.

In some cases, however, this is not enough. If you want (or need) more application-centric scale or the ability to insert additional services, you’ll graduate to more robust offerings that can provide programmability or application-centric scalability or both. To do that means plugging in proxies. The container orchestration environment you’re working in largely determines the deployment model of the proxy in terms of whether it’s a reverse proxy or a forward proxy. Just to keep things interesting, there’s also a third model – sidecar – that is the foundation of scalability supported by emerging service mesh implementations.

Reverse Proxy

A reverse proxy is closest to a traditional model in which a virtual server accepts all incoming requests and distributes them across a pool (farm, cluster) of resources. There is one proxy per ‘application’. Any client that wants to connect to the application is instead connected to the proxy, which then chooses and forwards the request to an appropriate instance. If the green app wants to communicate with the blue app, it sends a request to the blue proxy, which determines which of the two instances of the blue app should respond to the request. In this model, the proxy is only concerned with the app it is managing. The blue proxy doesn’t care about the instances associated with the orange proxy, and vice-versa.

Forward Proxy

This mode more closely models that of a traditional outbound firewall.
In this model, each container node has an associated proxy. If a client wants to connect to a particular application or service, it is instead connected to the proxy local to the container node where the client is running. The proxy then chooses an appropriate instance of that application and forwards the client's request. Both the orange and the blue app connect to the same proxy associated with their node. The proxy then determines which instance of the requested app should respond. In this model, every proxy must know about every application to ensure it can forward requests to the appropriate instance.

Sidecar Proxy

This mode is also referred to as a service mesh router. In this model, each container has its own proxy. If a client wants to connect to an application, it instead connects to the sidecar proxy, which chooses an appropriate instance of that application and forwards the client's request. This behavior is the same as in the forward proxy model. The difference between a sidecar and a forward proxy is that sidecar proxies do not need to modify the container orchestration environment. For example, in order to plug a forward proxy into k8s, you need both the proxy and a replacement for kube-proxy. Sidecar proxies do not require this modification because it is the app that automatically connects to its “sidecar” proxy instead of being routed through the proxy.

Each model has its advantages and disadvantages. All three share a reliance on environmental data (telemetry and changes in configuration) as well as the need to integrate into the ecosystem. Some models are pre-determined by the environment you choose, so future needs – service insertion, security, networking complexity – need to be carefully evaluated before settling on a model.

We’re still in the early days with respect to containers and their growth in the enterprise. As they continue to stretch into production environments, it’s important to understand the needs of the applications delivered by containerized environments and how their proxy models differ in implementation.

*It was extemporaneous when I wrote it down. Now, not so much.