Execute Scheduled kubectl Command within a Kubernetes Cluster

DevOps Engineer | Kubernetes | Python | Terraform | AWS | GCP
Sometimes you need to perform periodic activities in a cluster: restarting pods, cleaning up volumes, updating replica counts, and so on. On self-hosted clusters you can schedule kubectl commands as cron jobs on the control-plane nodes. But what about EKS/GKE/AKS clusters, where you have no such access? You wouldn't want to create a server solely for this purpose, and doing so brings extra overhead: creating IAM roles, installing kubectl, keeping the machine patched.
Well, here is the trick. You can do the following.
1. Create a Kubernetes-native CronJob using any Docker image that includes kubectl.
2. Run the pod with a service account bound to a Role/ClusterRole that permits the desired actions.
First, choose a namespace where the CronJob will be created and executed. Then, create a service account in that namespace.
kubectl create ns my-namespace
kubectl create sa my-cronjob -n my-namespace
Next, associate RBAC rules with the service account. You can use built-in roles (like cluster-admin, edit, or view), but avoid granting more permission than the job actually needs.
kubectl create clusterrolebinding my-cronjob --clusterrole=edit \
  --serviceaccount=my-namespace:my-cronjob
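Before wiring up the CronJob, you can verify the binding by impersonating the service account. A quick check, assuming the names above (a rollout restart is a patch on the Deployment, which is what we test for):

```shell
# Ask the API server whether the service account may patch deployments
# in one of the target namespaces. Prints "yes" once the binding works.
kubectl auth can-i patch deployments \
  --as=system:serviceaccount:my-namespace:my-cronjob -n prod
```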
Finally, create a CronJob definition that uses the ServiceAccount created above. Keep in mind that spec.schedule is interpreted in the timezone of the kube-controller-manager, which is UTC on most managed clusters. Additionally, you should include an exit 0 at the end of the command: Kubernetes marks a Job as failed when its container exits non-zero, so the explicit exit 0 keeps the Job reported as succeeded even if one of the kubectl commands fails.
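As an alternative to converting the schedule to UTC, Kubernetes 1.27+ lets you pin a CronJob to a named timezone via spec.timeZone. A config fragment, assuming the IANA zone name below:

```yaml
spec:
  schedule: "0 0 * * SUN"
  timeZone: "Europe/London"
```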
Here's an example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
  namespace: my-namespace
spec:
  schedule: "0 0 * * SUN"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          serviceAccountName: my-cronjob
          containers:
            - name: job
              image: bitnami/kubectl
              imagePullPolicy: IfNotPresent
              command:
                - /bin/bash
                - -c
                - |
                  kubectl -n prod rollout restart deploy
                  kubectl -n stage rollout restart deploy
                  kubectl -n dev rollout restart deploy
                  exit 0
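Once the manifest is saved (assuming the filename cronjob.yaml), you can apply it and trigger a one-off run to verify everything works without waiting for the next scheduled time:

```shell
# Apply the CronJob definition
kubectl apply -f cronjob.yaml

# Create a Job immediately from the CronJob's template
kubectl create job my-cronjob-test --from=cronjob/my-cronjob -n my-namespace

# Inspect the output of the run
kubectl logs -n my-namespace job/my-cronjob-test
```

The `kubectl create job --from=cronjob/...` trick is handy for smoke-testing any CronJob, not just this one.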