What's Happening Inside my Kubernetes Cluster?
Introduction
This article series has taken us a long way. We started with an overview of Kubernetes. In the second week we deployed complex applications using Helm and visualized them using Yipee.io. During the third week we enabled advanced application delivery services for all of our pods. In this fourth and final week, we are going to gain visibility into the components of a pod. Specifically, we are going to deploy a microservice application consisting of multiple pods, and aggregate all of the logs into a single pane of glass using Splunk. You will be able to slice and dice the logs any number of ways to see exactly what is happening down at the pod level.
To gain this visibility, we are going to do four things:
- Deploy a microservices application
- Configure Splunk Cloud
- Configure Kubernetes to send logs to Splunk Cloud
- Visualize the logs
Deploy a Microservices Application
As in the previous articles, we will be working in Google Cloud. Log into the Google Cloud Console and create a cluster. Once done, open a Google Cloud Shell session. Fortunately, Eberhard Wolff has already assembled a simple microservices application.
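If you prefer the command line, the cluster can also be created from Cloud Shell. This is a sketch assuming the cluster name cluster-1 and zone us-central1-a used throughout this article; adjust both to match your setup.

# create a three-node cluster (name, zone, and size are assumptions; adjust to taste)
gcloud container clusters create cluster-1 --zone us-central1-a --num-nodes 3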
First, set the credentials.
gcloud container clusters get-credentials cluster-1 --zone us-central1-a
We simply need to download the shell script.
wget https://raw.githubusercontent.com/ewolff/microservice-kubernetes/master/microservice-kubernetes-demo/kubernetes-deploy.sh
Next, run the shell script. This may take several minutes to complete.
bash ./kubernetes-deploy.sh
Once finished, check that all of the pods are running. You will see the several pods that make up the application. Many of the pods provide small services (microservices) to other pods.
kubectl get pods
If that looks good, find the external IP address of the Apache service. Note that the address may show as pending for several minutes; rerun the command until a real address appears.
kubectl get svc apache
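The output will look something like the following. The exact columns vary by kubectl version, and the addresses here are made up for illustration.

NAME      CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
apache    10.3.245.137   104.198.205.71   80:30521/TCP   5m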
Put the external IP address into your browser. The simple application has several functioning components. Feel free to try each of the functions. Every click will generate logs that we can analyze later.
That's it. You now have a microservices application running in Kubernetes. But which component is processing what traffic? If there are any slowdowns, which component is having problems?
Configure Splunk Cloud
Splunk is a set of tools for ingesting, processing, searching, and analyzing machine data. We are going to use it to analyze our application logs. Splunk comes in many forms, but for our purposes, the free trial of the cloud (hosted) service will work perfectly. Go to https://www.splunk.com/en_us/download.html then select the Free Cloud Trial.
Fill out the form. The form may take a while to process.
View the instance.
Finally, accept the terms.
You now have a free Splunk instance for the next 15 days. Watch for an email from Splunk. Much of the Splunk and Kubernetes configuration that follows is adapted from http://jasonpoon.ca/2017/04/03/kubernetes-logging-with-splunk/.
When the email from Splunk arrives, click on the link. This is your private instance of Splunk Cloud that has a lot of sample records. To finish the configuration, first let Splunk know that you want to receive records from a Universal Forwarder, which is Splunk-speak for an external agent. In our case, we will be using the Universal Forwarder to forward container logs from Kubernetes. To configure Splunk, click to choose a default dashboard. Select Forwarders: Deployment.
You will be asked to set up forwarding. Click Enable to turn on forwarding.
Forwarding is configured. Next we need to download the Splunk credentials. Go back to the link supplied in the email, and click on Universal Forwarder in the left pane.
Download Universal Forwarder credentials.
We need to get this file to the Google Cloud Shell. One way to do that is to create a bucket in Google Storage. On the Google Cloud page, click on the Storage Browser.
Create a transfer bucket. You will need to pick a name that is globally unique across Google Cloud Storage. I chose mls-xfer.
After typing in the name, click Create.
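Alternatively, the bucket can be created directly from Google Cloud Shell with gsutil; substitute your own globally unique name for mls-xfer.

# create the transfer bucket from the command line
gsutil mb gs://mls-xfer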
Next, upload the credentials file from Splunk by clicking Upload Files.
That’s all we need from Splunk right now. The next step is to configure Kubernetes to send the log data to Splunk.
Configure Kubernetes to Send Logs to Splunk Cloud
In this section we will configure Kubernetes to send container logs to Splunk for visualization and analysis.
Go to Google Cloud Shell to confirm the Splunk credential file is visible. Substitute your bucket name for mls-xfer.
gsutil ls gs://mls-xfer
If you see the file, then you can copy it to the Google Cloud Shell. Again, use your bucket name. Note the trailing dot.
gsutil cp gs://mls-xfer/splunkclouduf.spl .
If successful, you will have the file in the Google Cloud Shell where you can extract it.
tar xvf ./splunkclouduf.spl
You should see the files being extracted.
splunkclouduf/default/outputs.conf
splunkclouduf/default/cacert.pem
splunkclouduf/default/server.pem
splunkclouduf/default/client.pem
splunkclouduf/default/limits.conf
Next we need to build a configmap file holding the forwarder configuration. The --dry-run flag generates the YAML without creating anything in the cluster.
kubectl create configmap splunk-forwarder-config --from-file splunkclouduf/default/ --dry-run -o yaml > splunk-forwarder-config.yaml
Before using that file, we need to add some lines near the end of it, after the last certificate.
  inputs.conf: |
    # watch all files in /var/log/containers/
    [monitor:///var/log/containers/*.log]
    # extract `host` from the first group in the filename
    host_regex = /var/log/containers/(.*)_.*_.*\.log
    # set source type to Kubernetes
    sourcetype = kubernetes
Spaces and colons are important here. The last few lines of my splunk-forwarder-config.yaml file look like this:
    -----END ENCRYPTED PRIVATE KEY-----
  inputs.conf: |
    # watch all files in /var/log/containers/
    [monitor:///var/log/containers/*.log]
    # extract `host` from the first group in the filename
    host_regex = /var/log/containers/(.*)_.*_.*\.log
    # set source type to Kubernetes
    sourcetype = kubernetes
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: splunk-forwarder-config
Create the configmap using the supplied file.
kubectl create -f splunk-forwarder-config.yaml
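As a quick sanity check, you can confirm that the configmap exists before moving on.

kubectl get configmap splunk-forwarder-config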
The next step is to create a daemonset, which ensures that a copy of a pod runs on every node of the cluster. Copy and paste the below text into a file named splunk-forwarder-daemonset.yaml using vi or your favorite editor.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: splunk-forwarder-daemonset
spec:
  template:
    metadata:
      labels:
        app: splunk-forwarder
    spec:
      containers:
      - name: splunkuf
        image: splunk/universalforwarder:6.5.2-monitor
        env:
        - name: SPLUNK_START_ARGS
          value: "--accept-license --answer-yes"
        - name: SPLUNK_USER
          value: root
        volumeMounts:
        - mountPath: /var/run/docker.sock
          readOnly: true
          name: docker-socket
        - mountPath: /var/lib/docker/containers
          readOnly: true
          name: container-logs
        - mountPath: /opt/splunk/etc/apps/splunkclouduf/default
          name: splunk-config
        - mountPath: /var/log/containers
          readOnly: true
          name: pod-logs
      volumes:
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
      - name: container-logs
        hostPath:
          path: /var/lib/docker/containers
      - name: pod-logs
        hostPath:
          path: /var/log/containers
      - name: splunk-config
        configMap:
          name: splunk-forwarder-config
Finally, create the daemonset.
kubectl create -f splunk-forwarder-daemonset.yaml
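Before moving on, it is worth confirming that a forwarder pod is running on every node. The label selector below matches the app: splunk-forwarder label from the daemonset template; the pod names themselves will vary.

kubectl get daemonset splunk-forwarder-daemonset
kubectl get pods -l app=splunk-forwarder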
The microservices app should now be sending logs to your Splunk Cloud instance. The logs are updated every 15 minutes, so it may be a while before entries show up in Splunk. For now, explore the microservices ordering application so that log entries are generated. Feel free to explore Splunk as well.
Visualize the Logs
Now that logs are appearing in Splunk, go to the link in the email from Splunk. You should see a dashboard with log entries representing activity in the order processing application. Immediately on the dashboard you can see several options. You can drill down into the forwarders by status.
Further down the page you can see a list of the forwarding instances, along with statistics.
Below that is a graph of activity across the instances.
Explore the Splunk application. The combination of logs from several pods provides insight into the activity among the containers.
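If you want a concrete starting point, the search below (entered in Splunk's Search & Reporting app) counts events by pod. It relies on the sourcetype and host values set in inputs.conf earlier; adjust it if you named things differently.

sourcetype=kubernetes | stats count by host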
Clean Up
When you are done exploring, cleaning up is a breeze. Just delete the cluster.
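Assuming the cluster name and zone from the beginning of this article, a single command tears everything down.

gcloud container clusters delete cluster-1 --zone us-central1-a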
Conclusion
This article series has only scratched the surface of what is possible with Kubernetes. Even if you had no Kubernetes knowledge, the first article gave an overview and deployed a simple application. The second article introduced two ways to automate deployments. The third article showed how to integrate application delivery services. This article closed the loop by demonstrating application monitoring capabilities. You are now in a position to have a meaningful conversation about Kubernetes with just about anyone, even experts. Kubernetes is growing and changing quickly. Welcome to the world of containers.
Series Index
Deploy an App into Kubernetes in less than 24 Minutes
Deploy an App into Kubernetes Even Faster (Than Last Week)
Deploy an App into Kubernetes Using Advanced Application Services