Deploy an App into Kubernetes Using Advanced Application Services
Introduction
Welcome to week three of the Kubernetes and BIG-IP series. In the previous article we learned how easy it is to deploy complex applications into Kubernetes running on Google Container Engine (GKE). As you might imagine, that ease can quickly lead to large numbers of applications running in an environment. But what if you need application services for those applications? Suppose you want a centralized TLS policy for all applications, including those deployed into Kubernetes. What if you plan to implement DDoS protection at an operational level, rather than within the application? Suppose your organization intends to deploy applications using more sophisticated approaches, such as blue/green deployments or canary releases made possible by iRules. Perhaps you need other advanced traffic management capabilities. If only there were a way to bring the power of advanced application delivery controllers into Kubernetes, then Kubernetes applications could have the same assurances you give on-premises and cloud applications. Until recently, blending BIG-IP with containers was not possible, but that ability is now available. This article walks through deploying an application with multiple instances and then tying it into a BIG-IP for application delivery.
Requirements
In order to perform the steps in this article, you will need a few things.
- Access to Google Cloud and familiarity with using it (see the previous article for details)
- A BIG-IP license
As long as you have the above two items, you are ready to go. The next section gives an overview of F5’s Container Connector.
Container Connector
Container Connector is a containerized application, installed into Kubernetes, that enables a BIG-IP to control a pool of pods. Once configured, as pods are created and destroyed, the corresponding BIG-IP pool members are also created and destroyed. This allows the BIG-IP to manage traffic for the pods, while letting developers continue to deploy applications into Kubernetes. The next sections walk through the deployment.
Deployment
Deployment falls roughly into three sections: BIG-IP, Container Connector, and the actual application.
Deploy BIG-IP
To deploy BIG-IP in Google Cloud, go to the launcher page at https://console.cloud.google.com/launcher/search?q=f5. From there, choose the “F5 BIG-IP ADC Best - BYOL” option.
Next, launch the BIG-IP.
The next page provides default settings for several virtual machine parameters. At the bottom of the page are some firewall defaults and a Deploy button. Click Deploy to deploy the BIG-IP.
It will take three or four minutes for the deployment to complete. Once the BIG-IP image boots, it will have an ephemeral external IP address that can change if the instance is stopped and restarted. In a real deployment we would reserve a static IP address, but for this exercise the ephemeral address is fine. Just be aware that the external address may change. The next step is to set the admin password on the BIG-IP. To set the password, click on the SSH button.
You will see a message about Google Cloud trying to transfer keys to the VM.
After a few seconds you may see an error message.
Do nothing. Instead, wait for another 10 seconds or so and the SSH session will be established.
At the prompt, enter the command to modify the admin password.
modify auth password admin
You will be prompted for a new password and asked to confirm it. Try to avoid characters that might create problems for BASH or other command shells; for example, avoid the bang (exclamation point) and the question mark. For this exercise, I have changed the password to “nimda5873” and will use that password below.
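For reference, the exchange in the SSH session looks roughly like this. This is a sketch; the exact prompt text may vary slightly by version, and the final command simply persists the change to the saved configuration.

modify auth password admin
changing password for admin
new password:
confirm password:
save sys config

When you are done, close the SSH browser tab.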
The final step is to log into the BIG-IP instance and license it. Click on the instance name.
The next page shows details about the instance, including its external IP address. In my case, the external IP address is 104.198.218.160. Make note of this address.
With the external IP address, log into the BIG-IP by entering the URL into the browser address bar.
https://external-ip-address:8443
For my instance, the URL looks like this.
https://104.198.218.160:8443
Your browser may show a warning. Log into the device using the password you set above, then provide a license. Your BIG-IP is now provisioned and licensed. Next, create a partition called kubernetes. The name is case sensitive. Accept the default parameters.
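If you prefer the command line, the partition can also be created from a tmsh session instead of the GUI; a minimal sketch using standard tmsh commands:

create auth partition kubernetes
save sys config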
Note that there are no virtual servers defined yet.
You’re all done with the BIG-IP. In the next section we will install the Container Connector.
Deploy Container Connector
This section installs and configures the Container Connector software that controls the BIG-IP. First, create a cluster as described in the previous article; all of the following commands are typed into the Google Cloud Shell, also covered in that article. Deploying Container Connector involves two steps. The first step installs the software and configures communication with the BIG-IP. The second step configures the software to interact with a particular Kubernetes service (app).
Install Container Connector Software
Allow the Google Cloud Shell to interact with Kubernetes.
gcloud container clusters get-credentials cluster-1 --zone us-central1-a
Next, create a Kubernetes secret that will hold the BIG-IP credentials in a secure fashion. Substitute your password for nimda5873 in the following command.
kubectl create secret generic bigip-login --namespace kube-system --from-literal=username=admin --from-literal=password=nimda5873
Get a reference deployment file for the Container Connector.
wget http://clouddocs.f5.com/containers/v1/_downloads/f5-k8s-bigip-ctlr_image-secret.yaml
Edit the file to change a few parameters. This is a YAML file and is sensitive to column position; in other words, do not alter the whitespace in front of parameters. Make the changes below (a sketch of the edited section follows this list).
- Change the bigip-url to the BIG-IP's external IP address followed by :8443, for example:
"--bigip-url=104.198.218.160:8443",
- Adjust the file to point to the new beta build (this is currently necessary but should be unnecessary soon):
image: "f5networks/k8s-bigip-ctlr:1.1.0-beta.1"
- If you want more detailed logs, add this argument in the same section as the bigip-url parameter.
"--log-level=DEBUG",
Save the edited file, then run the following command to install Container Connector.
kubectl create -f f5-k8s-bigip-ctlr_image-secret.yaml -n kube-system
You should see:
deployment "k8s-bigip-ctlr-deployment” created
The pod will be deployed within the kube-system namespace of Kubernetes. As a result, it does not show up in a plain kubectl get pods, but you can monitor the status of the Container Connector pod by typing the following command.
kubectl get $(kubectl get pods -o name -n kube-system | grep k8s-bigip-ctlr-deployment) -n kube-system -w
Within 30 seconds or so, you should see the status of the pod as Running. If the pod is crashing, you can see the logs with this command.
kubectl logs $(kubectl get pods -o name -n kube-system | grep k8s-bigip-ctlr-deployment) -n kube-system
If all is well, the logs will show that Container Connector has communicated with the BIG-IP and has written no configuration (because no app has been deployed yet).
2017/06/08 23:21:59 [INFO] Wrote 0 Virtual Server configs
Next, we need to configure Container Connector to watch for an application.
Configure Container Connector to Watch for an Application
With Container Connector installed, we need to configure it to watch for an application. This step is done through a Kubernetes ConfigMap, an object that holds configuration data. You will have one ConfigMap per application.
First, download a reference ConfigMap.
wget http://clouddocs.f5.com/containers/v1/_downloads/f5-resource-vs-example.configmap.yaml
As with the deployment file above, we need to edit this file. Change the following (a sketch of the complete edited ConfigMap follows this list).
- Set bindAddr to the internal (NOT external) IP address of the BIG-IP. My bindAddr line reads:
"bindAddr": “10.128.0.2"
- Note: Using the internal IP address may seem counterintuitive, since the browser will connect to the virtual server using the external address. Google uses NAT to remap the destination address from the external address to the internal address before the BIG-IP sees the traffic. If the BIG-IP has a virtual server expecting traffic to the external address, it will never see that destination IP and will refuse the connections.
- Change serviceName from myService to demo-app. This is the name of the Kubernetes service (app) to monitor.
- Change servicePort from 3000 to 80. This is the port of the app where BIG-IP will send requests.
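Putting those edits together, the edited ConfigMap might look roughly like the following. This is a sketch based on the reference file's structure; keep the schema line and any other fields exactly as they appear in your downloaded copy.

kind: ConfigMap
apiVersion: v1
metadata:
  name: k8s.vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.2.json"
  data: |
    {
      "virtualServer": {
        "frontend": {
          "partition": "kubernetes",
          "balance": "round-robin",
          "mode": "http",
          "virtualAddress": {
            "bindAddr": "10.128.0.2",
            "port": 80
          }
        },
        "backend": {
          "serviceName": "demo-app",
          "servicePort": 80
        }
      }
    }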
Create the ConfigMap
kubectl create -f f5-resource-vs-example.configmap.yaml -n default
After the ConfigMap is successfully created, you should see:
configmap "k8s.vs" created
Your Container Connector is now fully configured, watching for a service named demo-app. The next section creates that service.
Deploy an App
Now all we need to do is deploy an app. The command below deploys a demo app listening on port 80 with two replicas.
kubectl run demo-app --replicas=2 --image f5devcentral/f5-demo-app --port=80
You can see the app running and that it has two replicas.
kubectl get pods
There is just one more step to deploy the app. We need to expose the pods as a service. A Kubernetes service is an abstraction that represents the app regardless of which node is running it or which pods are present. The service is the app boundary, while the node, IP, port, and pod can all change during the lifetime of the app.
kubectl expose deployment demo-app --port=80 --target-port=80 --type=NodePort
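For reference, the expose command above is roughly equivalent to applying a Service manifest like the one below. This is a sketch rather than a file used in this article, and it assumes the run=demo-app label that kubectl run applies to the pods.

apiVersion: v1
kind: Service
metadata:
  name: demo-app
  labels:
    run: demo-app
spec:
  type: NodePort
  selector:
    run: demo-app
  ports:
    - port: 80
      targetPort: 80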
Look at the BIG-IP. In the Network Map of the kubernetes partition, you can see that a virtual server, pool, and nodes have been created. Yours should look similar to this.
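You can cross-check what the BIG-IP shows against the cluster itself with a couple of standard kubectl commands in the Cloud Shell:

kubectl get nodes
kubectl get pods -o wide

The first command lists the cluster nodes, and the second shows which node each demo-app pod landed on.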
There are two things of note. First, all of the objects are in an unknown state. That is because health monitors have not been defined; in the interest of simplicity, health monitors are not covered in this article. Second, you might notice that there are three pool members but only two pods. The reason is that the BIG-IP manages traffic to the nodes in the cluster, while each node has its own load balancer to balance between the pods on that node. In the next article I will discuss the load balancer at the node level. For now, it is sufficient to know that traffic now moves through the BIG-IP, which handles all of the balancing across nodes. To recap, there are three nodes in the cluster, and that is what is listed on the BIG-IP.

There is one more step before we can test this. Google has a firewall policy that by default does not allow port 80 traffic to anything, including the BIG-IP.
Update the Firewall Rules
The firewall needs to allow port 80 traffic to the BIG-IP. The simplest approach is to allow port 80 traffic to all external IP addresses. In our test environment we can do that, but in a production environment you would want to be more precise about which hosts are allowed to receive port 80 traffic.
gcloud beta compute firewall-rules create default-allow-http --allow tcp:80
Run the App
To see that this is actually working, point your browser to the external IP of the BIG-IP. For example, my URL is:
http://104.198.218.160
If all goes well, you will see the demo-app splash page.
Notice the Server IP and Server Port. Refresh the page to see the values change as the requests are balanced across the nodes. The Server IP is a different address than any we have seen before. That's because Kubernetes has a node-level load balancer (as noted above) that remaps the destination IP address and port to those expected by the container. The layers of virtualization involve several IP addresses, but the key point to remember is that all traffic for the application now goes through the BIG-IP. That means any advanced services are now available for this app. In front of the app you can put a Web Application Firewall, SSL offload, iRules, and anything else that can be placed on a BIG-IP. As the backend pods scale up and down and deployments change, the BIG-IP can still provide advanced application services.
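You can also exercise the virtual server from the Google Cloud Shell; a quick sketch, substituting your own BIG-IP external address:

for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{http_code}\n" http://104.198.218.160/; done

Each request should return a 200 status code, confirming that traffic is flowing through the BIG-IP to the pods.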
Clean Up
Before shutting down this demonstration, some cleanup is in order. As before, delete the cluster. You should also either stop or delete the BIG-IP virtual machine. Finally, remove the firewall rule we added to allow port 80 access to the BIG-IP.
gcloud compute firewall-rules delete default-allow-http
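The remaining cleanup steps can also be done from the Cloud Shell; a sketch, assuming the cluster name cluster-1 from the previous article and whatever name Google assigned to your BIG-IP instance (substitute your own):

gcloud container clusters delete cluster-1 --zone us-central1-a
gcloud compute instances delete your-bigip-instance-name --zone us-central1-a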
Conclusion
This article started with the question of how to give Kubernetes the advanced application delivery services necessary for real-world production workloads. You have successfully deployed a demo application running on a cluster and delivered it through a BIG-IP, making available the power of iRules, SSL offload, and many other capabilities. With this approach, you can continue to apply your network operations skills, in real time, to Kubernetes workloads. The ability to deliver applications and make operational decisions remains decoupled from the development cycle, ensuring that applications can stay available at all times. In the next (and final) article of this series, we will explore how to gain visibility into the traffic flowing between pods.
Series Index
Deploy an App into Kubernetes in less than 24 Minutes
Deploy an App into Kubernetes Even Faster (Than Last Week)
Deploy an App into Kubernetes Using Advanced Application Services