Published on 31-Aug-2022 05:00 - edited on 05-Jun-2023 23:00 by JimmyPackets
Hosted Kubernetes (K8s) providers use different services for authenticating to your hosted K8s cluster. A kubeconfig file will define your cluster's API server address and your authentication to this server. Uploading this kubeconfig file is how you enable service discovery from another platform like F5 Distributed Cloud (F5 XC).
Service Discovery in this case means something outside the cluster is querying the K8s API and learning the services and pods running inside the cluster. In our case, we want to send traffic into K8s via a platform that can publish our k8s services internally or publicly, and also apply security to the applications exposed.
Follow these instructions to configure K8s Service Discovery in XC. You will notice that a kubeconfig file is typically required to be uploaded to XC.
In K8s, user authentication happens outside of the cluster. There is no "User" resource in K8s - you cannot create a User with kubectl. Unlike ServiceAccounts (SAs), which are created inside K8s and whose authentication secrets exist inside K8s, a User is authenticated by a system outside of K8s. There are multiple authentication schemes; certificates and OAuth tokens are among the most common.
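You can see this asymmetry directly with kubectl (the names here are placeholders of my choosing):

```
# ServiceAccounts are real cluster resources, so this succeeds:
kubectl create serviceaccount demo-sa

# There is no "user" resource type, so this fails with "unknown command":
kubectl create user demo-user
```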
For this article, I'll review the typical authentication methods of the major hosted K8s providers: Azure's AKS, AWS's EKS, and Google's GKE.
AKS authentication (authn) starts at Azure Active Directory, and authorization (authz) can be applied at AAD or k8s RBAC. For this article I am focused on authn (not authz).
If you follow official instructions to create an AKS cluster via CLI, you will run the az aks get-credentials command to get the access credentials for an AKS cluster and merge them into your kubeconfig file.
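For example (the resource group and cluster names below are placeholders, not values from any real cluster):

```
# Fetch AKS access credentials and merge them into ~/.kube/config
az aks get-credentials --resource-group my-resource-group --name my-aks-cluster
```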
You will see that a client certificate and key is included in the User section of your kubeconfig file. This kubeconfig file can be uploaded to F5 XC for successful service discovery because it does not rely on any additional software to be on the kubectl client.
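The user entry in the resulting kubeconfig looks roughly like this (values truncated, and the exact user name will depend on your cluster):

```
users:
- name: clusterUser_my-resource-group_my-aks-cluster
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0t...
```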
(However, be cautious. Don't share the kubeconfig file further, since it contains credentials to AKS. Microsoft provides instructions to use Azure role-based access control (Azure RBAC) to control access to these credentials. These Azure roles let you define who can retrieve the kubeconfig file, and what permissions they then have within the cluster.)
The AWS EKS cluster authentication process requires the use of an AWS IAM identity and typically relies on additional software being installed on the kubectl client machine. Instead of credentials being configured directly in the kubeconfig file, either the aws cli or aws-iam-authenticator is used locally on the kubectl client to generate a token. This token is sent to the K8s API server, which verifies it against AWS IAM.
The AWS user that created the cluster will have system:masters permissions in your cluster, but if you want to add another user, you must edit the aws-auth ConfigMap within Kubernetes. AWS has instructions for this, so I won't repeat them here, but the implication for what we are trying to do should be clear: authentication still depends on a token generated at request time by one of these commands:
```
aws-iam-authenticator token -i [clustername]
# or
aws eks get-token --region [region] --cluster-name [cluster]
```
We can update our kubeconfig file so that it carries credentials directly, which lets us upload it to F5 XC for service discovery. The XC instructions state that "...you must add AWS credentials in the kubeconfig file for successful service discovery". This means that to upload your kubeconfig to XC, you will need to configure it to use the aws-iam-authenticator option and provide your IAM aws_access_key_id and aws_secret_access_key as env vars in your kubeconfig. Example kubeconfig:
```
.... <everything else in kubeconfig file> ...
users:
- name: arn:aws:eks:us-east-1:<my_aws_acct_number>:cluster/<my_eks_cluster_name>
  user:
    exec:
      # note: newer clusters use client.authentication.k8s.io/v1beta1
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - <my_eks_cluster_name>
      command: aws-iam-authenticator
      env:
      - name: AWS_ACCESS_KEY_ID
        value: "<my_aws_access_key_id>"
      - name: AWS_SECRET_ACCESS_KEY
        value: "<my_aws_secret_access_key>"
```
I can use the above kubeconfig file for successful service discovery in XC. It works, but note what it requires: long-lived IAM credentials embedded in plain text, and aws-iam-authenticator available wherever the kubeconfig is used. That led me to ask: can I get a kubeconfig file to authenticate to AWS without 3rd-party tools? Read on for how to do this, but first I'll cover Google's GKE.
Similar to EKS, GKE uses the gcloud cli as a "credential helper" in the typical kubeconfig file for GKE cluster administration. This excellent article covers GKE authentication in depth.
This means we need to take a similar approach as we did for EKS if we want to authenticate to the GKE API server from a location without gcloud installed. For GKE, I followed an approach I learned from this blog post, but I'll summarize it for you now: obtain a service account key as a .json file, then set the environment variable GOOGLE_APPLICATION_CREDENTIALS so that its value is the location of your .json file.
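As a quick sketch, with a hypothetical key path:

```shell
# Tell Google client libraries where to find the service account key file
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/my-sa-key.json
```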
This article from Google explains the community's requirement that provider-specific code be removed from the OSS codebase in v1.25 (as of my writing, v1.24 is the most recent release). This means an update to kubectl will remove the existing functionality that allows kubectl to authenticate to GKE, and you will instead need a new credential helper from Google called gke-gcloud-auth-plugin. In other words, you'll soon be relying on a different 3rd-party plugin for GKE authentication, so our earlier question comes up again: can't I just have a kubeconfig file that doesn't rely on a 3rd-party library?
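With the new plugin, the user section of a GKE kubeconfig uses an exec block much like EKS does. A sketch of the expected shape (the cluster name is a placeholder):

```
users:
- name: my-gke-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      provideClusterInfo: true
```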
What if we need to upload our kubeconfig to a machine that does not have the aws cli, aws-iam-authenticator, or any other 3rd party credential helpers installed? Sometimes you just need a kubeconfig that doesn't rely on 3rd party software or their IAM service, and this is what ServiceAccounts can achieve.
I wrote an article with a script that you can use, focused on K8s SAs for service discovery with F5 XC. Since it's all documented there, I'll just summarize the steps here.
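In outline, the process looks like this on a K8s 1.24+ cluster. This is a sketch, not the script from my other article; the names, duration, server address, and CA path are all placeholders of my choosing:

```
# Create a ServiceAccount and grant it read access to services, endpoints, and pods
kubectl create serviceaccount xc-discovery
kubectl create clusterrole xc-discovery-role \
  --verb=get,list,watch --resource=services,endpoints,pods
kubectl create clusterrolebinding xc-discovery-binding \
  --clusterrole=xc-discovery-role --serviceaccount=default:xc-discovery

# Request a token for the SA (K8s 1.24+; the API server may cap the duration)
TOKEN=$(kubectl create token xc-discovery --duration=8760h)

# Assemble a standalone kubeconfig around that token
kubectl config set-cluster my-cluster --server=https://<api-server-address> \
  --certificate-authority=/path/to/ca.crt --embed-certs=true --kubeconfig=sa.kubeconfig
kubectl config set-credentials xc-discovery --token="$TOKEN" --kubeconfig=sa.kubeconfig
kubectl config set-context xc-discovery@my-cluster --cluster=my-cluster \
  --user=xc-discovery --kubeconfig=sa.kubeconfig
kubectl config use-context xc-discovery@my-cluster --kubeconfig=sa.kubeconfig
```

The resulting sa.kubeconfig contains only an embedded CA certificate and a bearer token, so it can be uploaded to XC without any credential helper installed anywhere.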
A kubeconfig file is relatively straightforward for most k8s admins. However, uploading a kubeconfig file into a hosted service, for the purpose of k8s service discovery, will sometimes require further analysis. Remember: an AKS kubeconfig embeds certificate credentials directly, so it can be uploaded as-is; EKS and GKE kubeconfig files typically call out to a local credential helper, so you must either embed cloud IAM credentials or avoid the helper entirely; and a ServiceAccount-based kubeconfig is the most portable option, since it depends on nothing outside the cluster.
All this might seem complex at first, but after service discovery is configured the ability to expose your services anywhere - publicly or privately - is incredibly easy and powerful. Good luck and reach out with feedback!