07-Sep-2022 05:00 - edited 01-Mar-2023 11:16
Kubernetes (K8s) service discovery is a feature within F5 Distributed Cloud Services (F5 XC) that allows you to discover applications in your K8s cluster and publish them to sites or the Internet via HTTP Load Balancers.
When configuring K8s service discovery in F5 XC, you can create a ServiceAccount (SA) with narrowly-scoped permissions and generate your own kubeconfig file to authenticate via token. This achieves two things: least-privilege access for the service discovery function, and no dependency on third-party authentication plugins on the client.
Accessing the K8s cluster API requires authentication, which can be configured with multiple authentication schemes; commonly, at least two methods are enabled at once:
Hosted K8s providers, such as AKS, EKS, or GKE, commonly use an authentication plugin. Users authenticate to the cluster via Azure AD, AWS IAM, or Google Cloud IAM, respectively. For this to happen, the client (i.e., kubectl) may need to provide an authentication token from this third-party auth service.
It is common for a third party library to be required on the kubectl client for this token generation. I've covered this in more depth in a previous article, but an admin of an EKS or GKE cluster may see that their kubeconfig file requires the kubectl client to execute the aws cli, the aws-iam-authenticator library, the gcloud cli, or the gke-gcloud-auth-plugin library.
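For example, a kubeconfig generated for an EKS cluster often contains an exec stanza along these lines (the user and cluster names here are illustrative), which fails on any machine that doesn't have the aws CLI installed:

```yaml
users:
- name: my-eks-user                # illustrative user name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws                 # kubectl shells out to the aws CLI for a token
      args:
      - eks
      - get-token
      - --cluster-name
      - my-cluster                 # illustrative cluster name
```

Every client that wants to use this kubeconfig inherits the dependency on that external binary.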
There are two main problems here: the client machine needs third-party binaries installed just to authenticate, and the resulting credentials are typically tied to a broadly-privileged user rather than a narrowly-scoped identity.
I'll quote the official documentation when suggesting an alternative:
"Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API."
For the reasons above, I suggest creating a Kubernetes ServiceAccount (SA) with a custom ClusterRole, extracting the default bearer token for authenticating as this SA, and creating a kubeconfig file to allow F5 XC to use this SA and ClusterRole in service discovery.
Let's configure service discovery in F5 XC to use a ServiceAccount by creating the K8s resources required and generating a new kubeconfig file.
A few minor points to consider if you copy/paste the commands below:
- I use jq to extract some values from JSON results. If you don't have jq installed, install it or do this manually.
- For CLUSTER_NAME below, I've assumed that your kubeconfig file has the server we are targeting listed first, if you have multiple servers in your kubeconfig. You may want to change these lines if required.
NAMESPACE='kube-system'
SA_NAME='xc-sa'
SECRET_NAME='xc-sa-secret'

kubectl create sa $SA_NAME -n $NAMESPACE

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: $SECRET_NAME
  namespace: $NAMESPACE
  annotations:
    kubernetes.io/service-account.name: $SA_NAME
type: kubernetes.io/service-account-token
EOF

# Now that we've created a ServiceAccount with a token to authenticate,
# let's collect the details of this auth token, along with our existing cluster details.
CA_CRT=$(kubectl --namespace $NAMESPACE get secret/$SECRET_NAME -o json | jq -r '.data["ca.crt"]')
TOKEN=$(kubectl get secret/$SECRET_NAME -n $NAMESPACE -o json | jq -r .data.token | base64 --decode)
SERVER=$(kubectl config view -o json | jq -r '.clusters[0].cluster.server')
CLUSTER_NAME=$(kubectl config view -o json | jq -r '.clusters[0].name')
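As an aside, Secret data fields are stored base64-encoded, which is why the token pipeline above pipes through base64 --decode. Here's a self-contained illustration of that extraction pattern, using an invented sample value in place of real cluster output:

```shell
# Invented sample of a Secret's JSON, standing in for real kubectl output.
SAMPLE_SECRET='{"data":{"token":"bXktc2FtcGxlLXRva2Vu"}}'

# Same pattern as above: pull the field with jq, then base64-decode it.
TOKEN=$(printf '%s' "$SAMPLE_SECRET" | jq -r .data.token | base64 --decode)
echo "$TOKEN"   # prints: my-sample-token
```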
We'll create YAML files first, and then apply them. To get fancy, you can run cat and direct output to kubectl in a single command if you like. First, create the ClusterRole.
CLUSTER_ROLE_NAME='xc-service-discovery'

cat <<EOF > cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: $CLUSTER_ROLE_NAME
rules:
- apiGroups: [""]
  resources:
  - services
  - endpoints
  - pods
  - nodes
  - nodes/proxy
  - namespaces
  verbs: ["get", "list", "watch"]
EOF

kubectl apply -f cluster-role.yaml
Now create the ClusterRoleBinding to bind the SA and ClusterRole.
cat <<EOF > cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: $CLUSTER_ROLE_NAME
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: $CLUSTER_ROLE_NAME
subjects:
- kind: ServiceAccount
  name: $SA_NAME
  namespace: $NAMESPACE
EOF

kubectl apply -f cluster-role-binding.yaml
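If you'd like, you can spot-check the RBAC grant with kubectl's impersonation support before going further. These commands run against the live cluster and assume the NAMESPACE and SA_NAME variables set earlier:

```shell
# Should print "yes": the ClusterRole grants list on services.
kubectl auth can-i list services --as=system:serviceaccount:$NAMESPACE:$SA_NAME

# Should print "no": the ClusterRole grants no write verbs.
kubectl auth can-i delete pods --as=system:serviceaccount:$NAMESPACE:$SA_NAME
```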
Finally, generate the kubeconfig file using the values collected earlier.

cat <<EOF > sa.kubeconfig
---
apiVersion: v1
kind: Config
clusters:
- name: $CLUSTER_NAME
  cluster:
    certificate-authority-data: $CA_CRT
    server: $SERVER
contexts:
- name: $SA_NAME-$CLUSTER_NAME
  context:
    cluster: $CLUSTER_NAME
    user: $SA_NAME
users:
- name: $SA_NAME
  user:
    token: $TOKEN
current-context: $SA_NAME-$CLUSTER_NAME
EOF
Optionally, test this kubeconfig file by setting your $KUBECONFIG variable appropriately. Validate that authentication to the cluster using this SA is successful.
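Concretely, that optional test might look like this, run from a machine with network access to the API server (the pod name below is a placeholder):

```shell
# Use only the new kubeconfig for these commands.
export KUBECONFIG=$(pwd)/sa.kubeconfig

# Should succeed: the ClusterRole allows get/list/watch on these resources.
kubectl get services --all-namespaces

# Should be denied: no write verbs were granted to the SA.
kubectl delete pod placeholder-pod-name
```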
Follow the F5 XC documentation to configure K8s service discovery. This will configure discovery from a Customer Edge (CE) site, so your CE node must have network connectivity to the cluster's API server. After your services are discovered, you can create an HTTP Load Balancer with an origin pool, where the pool member type is a K8s service. Now, your pods in K8s are exposed via F5 XC, and this service discovery is using a ServiceAccount with least privilege access!
This article demonstrates that Service Discovery can be configured from F5 XC to Kubernetes based on a kubeconfig file that has been created to use a custom ServiceAccount and ClusterRole. This achieves our goals of least privilege for our Service Discovery function, and no dependency on third party libraries required by the client to access the K8s API.