Kubernetes API - Securing the API


In this first part, you will create a new user, authenticate it to the cluster, and authorize it to list pods.

Create a private key for human-user
cd $HOME
mkdir securing-api
cd securing-api
openssl genrsa -out human-user.key 2048
Create a certificate signing request (CSR). The CN field becomes the username; you will submit the CSR to the Kubernetes cluster for signing.

For reference, https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/

openssl req -new -key human-user.key -subj "/CN=human-user" -out human-user.csr 
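Optionally, you can sanity-check the CSR before submitting it; this should print subject=CN = human-user
openssl req -in human-user.csr -noout -subject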
Create csr.yaml
cat <<EOF > csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: human-user
spec:
  request: <request>
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400  # one day
  usages:
  - client auth
EOF
Edit csr.yaml and replace <request> with the Base64-encoded value of the CSR. You can get the encoded value with this command:
cat human-user.csr | base64 | tr -d "\n"
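If you prefer not to edit the file by hand, you can also generate the manifest with the encoded request embedded directly (the same heredoc as above, using shell command substitution):
cat <<EOF > csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: human-user
spec:
  request: $(base64 < human-user.csr | tr -d "\n")
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400  # one day
  usages:
  - client auth
EOF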
Apply the csr.yaml manifest
kubectl apply -f csr.yaml 
Review CSR
kubectl get csr
Approve CSR
kubectl certificate approve human-user
The signed certificate is Base64-encoded under status.certificate. Save it to a file called human-user.crt:
kubectl get csr human-user -o jsonpath='{.status.certificate}'| base64 -d > human-user.crt
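To confirm the signed certificate is what you expect, you can inspect its subject and validity dates:
openssl x509 -in human-user.crt -noout -subject -dates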
To connect to the cluster, we need to store the cluster’s certificate authority data in a file called ca.crt:
kubectl config view --minify --raw --output 'jsonpath={..cluster.certificate-authority-data}' | base64 -d > ca.crt
We also need to know the Kubernetes API server URL:
kubectl cluster-info

The output may look like this:

Kubernetes control plane is running at https://api.exedb-sno-pool-wpljs.opdev.io:6443
In this example, <kubernetes-server> is https://api.exedb-sno-pool-wpljs.opdev.io:6443
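As an optional convenience, you can capture the server URL into a shell variable instead of copying it by hand (a small sketch, assuming you want the cluster from your current context):
KUBE_SERVER=$(kubectl config view --minify --output 'jsonpath={.clusters[0].cluster.server}')
echo "$KUBE_SERVER"
You can then substitute $KUBE_SERVER for <kubernetes-server> in the commands below.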
Now it’s time to connect to the cluster as human-user
kubectl --server <kubernetes-server> --certificate-authority ca.crt --client-certificate human-user.crt --client-key human-user.key get pods

Note: Use the <kubernetes-server> value from the previous step

You will see some errors because the admin client certificate and client key from your kubeconfig take precedence over the ones you entered. To work around this, temporarily rename the kubeconfig file:
mv ~/.kube/config ~/.kube/config-notuse
Connect to the cluster as human-user again. This time the request authenticates, but listing pods is still denied with a Forbidden error because human-user has no RBAC permissions yet:
kubectl --server <kubernetes-server> --certificate-authority ca.crt --client-certificate human-user.crt --client-key human-user.key get pods
Restore the kubeconfig so we can add the human-user credentials to it
mv ~/.kube/config-notuse ~/.kube/config
Add the human-user credentials to the kubeconfig
kubectl config set-credentials human-user --client-key=human-user.key --client-certificate=human-user.crt --embed-certs=true
Create a context for the human-user
kubectl config set-context human-user-context --cluster=<cluster-nickname> --user=human-user

Note: You can get <cluster-nickname> by running: kubectl config view --minify -o jsonpath='{.clusters[].name}'
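For example, you can capture the name in a variable and use it directly (a small sketch combining the two commands above, with an explicit index instead of the empty brackets):
CLUSTER_NAME=$(kubectl config view --minify -o jsonpath='{.clusters[0].name}')
kubectl config set-context human-user-context --cluster="$CLUSTER_NAME" --user=human-user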

Change the context to human-user-context
kubectl config use-context human-user-context
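You can confirm which context is active; the current one is marked with an asterisk:
kubectl config get-contexts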
List pods as human-user. Expect a Forbidden error again, since we have not yet granted any RBAC permissions:
kubectl get pod
Change the context back to admin
kubectl config use-context admin
Create cluster role
kubectl create clusterrole cr --verb=get,list,watch --resource=pods,pods/status
Create cluster role binding
kubectl create clusterrolebinding crb --clusterrole=cr --user=human-user
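If you want to inspect the manifest these imperative commands generate, both accept --dry-run=client -o yaml; for example:
kubectl create clusterrolebinding crb --clusterrole=cr --user=human-user --dry-run=client -o yaml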
Check user permissions as human-user
kubectl auth can-i get pod --as human-user
Check user permissions as admin
kubectl auth can-i get pod --as admin
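To see the full set of permissions granted to human-user, you can also ask for a list:
kubectl auth can-i --list --as human-user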

In the second part, you will create and authenticate a new service account in the cluster and authorize the service account to list pods.

Create Service Account
kubectl create serviceaccount mysa

Note: Make sure you’re using the admin context

Set the credentials and the context for the service account
kubectl config set-credentials mysa --token=$(kubectl get secret <secret_name> -o jsonpath={.data.token} | base64 -d)

Note: In the command, <secret_name> is the name of your service account secret. How do you get the service account’s secret?
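Hint: on clusters before Kubernetes 1.24, a token secret is created automatically for each service account, and you can look up its name with:
kubectl get serviceaccount mysa -o jsonpath='{.secrets[0].name}'
On 1.24 and later, no secret is created automatically; one alternative is to request a short-lived token and use it directly:
kubectl config set-credentials mysa --token=$(kubectl create token mysa)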

kubectl config set-context sa-context --user=mysa --cluster=<cluster-nickname>
Change the context to use the serviceaccount named mysa
kubectl config use-context sa-context
List pods as mysa. Expect a Forbidden error, since mysa has not been granted any permissions yet:
kubectl get pod
Change the context to use admin
kubectl config use-context admin
Create clusterrole myrole
kubectl create clusterrole myrole --verb=get,list,watch --resource=pods,pods/status
Create clusterrolebinding to bind myrole to mysa
kubectl create clusterrolebinding mybinding \
--clusterrole=myrole \
--serviceaccount=default:mysa
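You can verify the binding from the admin context before switching; when impersonating a service account, the username takes the form system:serviceaccount:<namespace>:<name>:
kubectl auth can-i list pods --as system:serviceaccount:default:mysa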
Switch to sa-context
kubectl config use-context sa-context
Verify the permissions by listing pods as mysa
kubectl get pod
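When you are done, you can optionally clean up the objects created in this exercise (run these from the admin context; ignore not-found errors if the CSR has already been garbage-collected):
kubectl config use-context admin
kubectl delete clusterrolebinding crb mybinding
kubectl delete clusterrole cr myrole
kubectl delete csr human-user
kubectl delete serviceaccount mysa
kubectl config delete-context human-user-context
kubectl config delete-context sa-context
kubectl config unset users.human-user
kubectl config unset users.mysa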