Service Discovery and authentication options for Kubernetes providers (EKS, AKS, GKE)

Summary

Hosted Kubernetes (K8s) providers use different services for authenticating to your hosted K8s cluster. A kubeconfig file defines your cluster's API server address and how you authenticate to that server. Uploading this kubeconfig file is how you enable service discovery from another platform like F5 Distributed Cloud (F5 XC).

What is Service Discovery in F5 XC?

Service Discovery here means that something outside the cluster queries the K8s API and learns which services and pods are running inside the cluster. In our case, we want to send traffic into K8s via a platform that can publish our K8s services internally or publicly, and also apply security to the exposed applications.

How do I configure k8s Service Discovery?

Follow these instructions to configure K8s Service Discovery in XC. You will notice that a kubeconfig file typically needs to be uploaded to XC.

User authentication to AKS, EKS, and GKE

In K8s, User authentication happens outside of the cluster. There is no "User" resource in K8s - you cannot create a User with kubectl. Unlike ServiceAccounts (SAs), which are created inside K8s and whose authentication secrets exist inside K8s, a User is authenticated by a system outside of K8s. There are multiple authentication schemes; certificates and OAuth are among the most common.

For this article, I'll review the typical authentication methods for the major providers: Azure's AKS, AWS's EKS, and Google's GKE.

Azure AKS

AKS authentication (authn) starts at Azure Active Directory (AAD), and authorization (authz) can be applied at AAD or with K8s RBAC. For this article I am focused on authn (not authz).

If you follow the official instructions to create an AKS cluster via CLI, you will run the az aks get-credentials command to get the access credentials for an AKS cluster and merge them into your kubeconfig file.
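
For example, the command looks like this (the resource group and cluster names are placeholders, not values from this article):

az aks get-credentials --resource-group <myResourceGroup> --name <myAKSCluster>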

You will see that a client certificate and key are included in the users section of your kubeconfig file. This kubeconfig file can be uploaded to F5 XC for successful service discovery because it does not rely on any additional software being installed on the kubectl client.
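
For illustration, the users section of a default (non-AAD-integrated) AKS kubeconfig looks roughly like this; the values are truncated placeholders and the exact fields can vary with your AKS configuration:

users:
- name: clusterUser_<myResourceGroup>_<myAKSCluster>
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
    token: <cluster access token>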

(However, be cautious. Don't share the kubeconfig file further, since it contains credentials to AKS. Microsoft provides instructions to use Azure role-based access control (Azure RBAC) to control access to these credentials. These Azure roles let you define who can retrieve the kubeconfig file, and what permissions they then have within the cluster.)

AWS EKS

"Typical" EKS kubeconfig

The AWS EKS cluster authentication process requires an AWS IAM identity and typically relies on additional software being installed on the kubectl client machine. Instead of credentials being configured directly in the kubeconfig file, either the aws cli or aws-iam-authenticator is used locally on the kubectl client to generate a token. This token is sent to the K8s API server and is verified against AWS IAM (see steps 1-2 in the image below).
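
For reference, a kubeconfig generated by aws eks update-kubeconfig typically contains an exec stanza roughly like the one below, which shells out to the aws cli on every request (the account number, region, and cluster name are placeholders, and the apiVersion may be v1alpha1 or v1beta1 depending on your aws cli version):

users:
- name: arn:aws:eks:us-east-1:<my_aws_acct_number>:cluster/<my_eks_cluster_name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - <my_eks_cluster_name>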

Use a service account for EKS access

The AWS user that created the cluster will have system:masters permissions on your cluster, but if you want to add another user, you must edit the aws-auth ConfigMap within Kubernetes. AWS has instructions for doing this, so I won't repeat them here, but the implications for what we are trying to do should be clear:

  1. You don't want a 3rd party authenticating to EKS as your primary user account. If you created the EKS cluster with your primary account, you should create a different user for EKS access. Create an AWS IAM user for this purpose, and then add this user to the aws-auth ConfigMap within Kubernetes (a sketch of this ConfigMap follows this list).
  2. EKS requires authentication from the kubectl client. Instead of hard-coding credentials into the kubeconfig file, the kubeconfig file tells the client to run one of the following two commands, which rely on the existence of additional software:
    aws-iam-authenticator token -i [clustername]
    or
    aws eks get-token --region [region] --cluster-name [cluster]
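
For item 1 above, a minimal sketch of the aws-auth ConfigMap entry that maps an IAM user into the cluster might look like this (the user name, ARN, and group are placeholders; any existing mapRoles entries stay as they are, and the AWS instructions remain the authoritative reference):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::<my_aws_acct_number>:user/<my_discovery_user>
      username: <my_discovery_user>
      groups:
      - xc-discovery   # a hypothetical group you bind to read-only RBAC permissions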

We can update our kubeconfig file to include credentials directly, so that we can upload this file to F5 XC for service discovery.

Creating a kubeconfig file that uses aws cli or aws-iam-authenticator with provided credentials

In the case of F5 XC, we can see from the instructions that "...you must add AWS credentials in the kubeconfig file for successful service discovery". This means that to upload your kubeconfig to XC, you will need to configure it to use the aws-iam-authenticator option and provide your IAM aws_access_key_id and aws_secret_access_key as env vars in your kubeconfig. Example kubeconfig:

.... <everything else in kubeconfig file> ...
users:
- name: arn:aws:eks:us-east-1:<my_aws_acct_number>:cluster/<my_eks_cluster_name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - <my_eks_cluster_name>
      command: aws-iam-authenticator
      env:
      - name: AWS_ACCESS_KEY_ID
        value: "<my_aws_access_key_id>"
      - name: AWS_SECRET_ACCESS_KEY
        value: "<my_aws_secret_access_key>"

I can use the above kubeconfig file for successful service discovery in XC. A few notes from my testing:

  • In my testing, the aws cli method did not work in XC. It appears that the XC node has aws-iam-authenticator installed, but not aws cli. 
  • Notice the apiVersion above is v1alpha1. This works when uploaded to F5 XC, but when testing this file on my local workstation, I had to make sure my kubectl client was downgraded to v1.23.6 or lower. v1alpha1 has been deprecated in newer kubectl versions, but it was required for me to authenticate with the aws-iam-authenticator method. This is not ideal: it appears that the XC node runs a version of kubectl that still allows v1alpha1 auth, but we know this will be removed eventually. (See below for a quick local test.)
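
If you want to sanity-check a kubeconfig like this locally before uploading it (using a kubectl version that still accepts v1alpha1), a quick test looks something like this (the filename is a placeholder):

kubectl --kubeconfig ./eks-kubeconfig.yaml get services --all-namespaces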

These notes led me to ask: can I get a kubeconfig file to authenticate to AWS without 3rd party tools? Read on for how to do this, but first I'll cover Google's GKE.

Google GKE

Similar to EKS, GKE uses the gcloud CLI as a "credential helper" in the typical kubeconfig file for GKE cluster administration. This excellent article covers GKE authentication very well.
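
For illustration, the users section of a kubeconfig generated by gcloud container clusters get-credentials typically looks something like the following (the project, zone, cluster, and gcloud path are placeholders and vary by installation):

users:
- name: gke_<my_project>_<my_zone>_<my_cluster>
  user:
    auth-provider:
      name: gcp
      config:
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        cmd-args: config config-helper --format=json
        token-key: '{.credential.access_token}'
        expiry-key: '{.credential.token_expiry}'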

This means we need to take a similar approach as we did for EKS if we want to authenticate to the GKE API server from a location without gcloud installed. For GKE, I followed an approach I learned in this blog post, but I'll summarize it for you now:

  1. Create a Kubeconfig file that contains your cluster API server address and CA certificate. These are safe to commit in git because they do not need to be private.
  2. Create a Service Account in Google, generate a key for this service account and save it in JSON format. Export the variable GOOGLE_APPLICATION_CREDENTIALS so the value is the location of your .json file.
  3. Use kubectl commands. These should work because kubectl, as a client, is aware of the env var above (a rough sketch of these steps follows this list).
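
Here is a rough sketch of those three steps, assuming a kubectl version that still supports the legacy gcp auth-provider (which resolves Application Default Credentials, and therefore the GOOGLE_APPLICATION_CREDENTIALS variable). All names, endpoints, and paths are placeholders, not values from the blog post:

# Step 1: kubeconfig with only the API server address and CA certificate
kubectl config set-cluster my-gke-cluster \
  --server=https://<cluster-endpoint> \
  --certificate-authority=ca.crt --embed-certs=true \
  --kubeconfig=gke-kubeconfig.yaml
kubectl config set-credentials gke-sa-user --auth-provider=gcp \
  --kubeconfig=gke-kubeconfig.yaml
kubectl config set-context gke --cluster=my-gke-cluster --user=gke-sa-user \
  --kubeconfig=gke-kubeconfig.yaml
kubectl config use-context gke --kubeconfig=gke-kubeconfig.yaml

# Step 2: Google service account key (created wherever you have gcloud, or
# downloaded from the console), exported via the standard env var
gcloud iam service-accounts keys create sa-key.json \
  --iam-account=<sa-name>@<project-id>.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=$PWD/sa-key.json

# Step 3: kubectl now authenticates with the Google service account
kubectl --kubeconfig=gke-kubeconfig.yaml get pods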

This article from Google explains the community's requirement that provider-specific code be removed from the OSS codebase in v1.25 (as of this writing, v1.24 is the most recent release). This requires an update to kubectl that removes the existing functionality allowing kubectl to authenticate to GKE. Soon you will need to use a new credential helper from Google called gke-gcloud-auth-plugin; in other words, you'll still be relying on a 3rd party plugin for GKE authentication. So our question comes up again, as it did earlier: can't I just have a kubeconfig file that doesn't rely on a 3rd party library?

Creating a kubeconfig file that does not rely on other software

What if we need to upload our kubeconfig to a machine that does not have the aws cli, aws-iam-authenticator, or any other 3rd party credential helpers installed? Sometimes you just need a kubeconfig that doesn't rely on 3rd party software or a cloud provider's IAM service, and this is exactly what ServiceAccounts can achieve.

I wrote an article with a script that you can use, focused on K8s SAs for service discovery with F5 XC. Since it's all documented in that article, I'll just summarize the steps here (with a rough sketch after the list):

  1. Create a Service Account (SA) with permissions to discover services. 
  2. Export the secret for that SA to use as an auth token.
  3. Create a kubeconfig file that targets your cluster and authenticates with your SA.
  4. This file can be uploaded to XC.
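
Here is a rough sketch of those four steps, assuming a cluster version that still auto-creates a token Secret for the SA (newer clusters would use kubectl create token instead). The SA name, namespace, ClusterRole, and filenames are placeholders, not values from the referenced article:

# Step 1: SA with read permissions suitable for discovery
kubectl create serviceaccount xc-discovery -n default
kubectl create clusterrolebinding xc-discovery-view \
  --clusterrole=view --serviceaccount=default:xc-discovery

# Step 2: export the SA's secret to use as an auth token
SECRET=$(kubectl get sa xc-discovery -n default -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret $SECRET -n default -o jsonpath='{.data.token}' | base64 -d)

# Step 3: kubeconfig that targets the cluster and authenticates with the token
kubectl config set-cluster my-cluster --server=https://<api-server> \
  --certificate-authority=ca.crt --embed-certs=true --kubeconfig=xc-kubeconfig.yaml
kubectl config set-credentials xc-discovery --token=$TOKEN --kubeconfig=xc-kubeconfig.yaml
kubectl config set-context xc --cluster=my-cluster --user=xc-discovery \
  --kubeconfig=xc-kubeconfig.yaml
kubectl config use-context xc --kubeconfig=xc-kubeconfig.yaml

# Step 4: xc-kubeconfig.yaml is now the file you upload to XC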

Summary

A kubeconfig file is relatively straightforward for most K8s admins. However, uploading a kubeconfig file into a hosted service for the purpose of K8s service discovery will sometimes require further analysis. Remember:

  1. Your kubeconfig file defines your authentication to the K8s API server, so use a service account (not your primary user account).
  2. If your kubeconfig file relies on a helper like the aws cli, aws-iam-authenticator, or gcloud, you can probably work around this by editing your kubeconfig file to use an SA and token instead. This may be required if you upload the kubeconfig file to a hosted platform like F5 XC.

All this might seem complex at first, but after service discovery is configured the ability to expose your services anywhere - publicly or privately - is incredibly easy and powerful. Good luck and reach out with feedback!

Updated Jun 06, 2023
Version 4.0
