Customers are moving to the cloud for its scalability and cost-effectiveness. Some are adopting Kubernetes to modernize their workloads, iterate more quickly, and deliver new services or product features to their customers sooner. As the number of clusters to be managed grows, customers struggle with different deployment mechanisms and application programming interfaces (APIs) for managing Kubernetes clusters. They’re looking for a more centralized approach to create and manage clusters running in different environments, instead of using a separate tool for each. In this post, we’ll show you how to pair Cluster API and Argo CD to streamline the deployment and operation of multiple Kubernetes clusters.
What is Cluster API?
Cluster API is a Kubernetes project that enables declarative management of Kubernetes clusters, using APIs to easily create, configure, and update them. In October 2021, the Cloud Native Computing Foundation (CNCF) announced that Cluster API v1.0 was production-ready, citing its growing adoption, feature maturity, and a strong commitment to community and inclusive innovation.
Cluster API Provider for AWS (CAPA), maintained by an open-source community, is a SIG Cluster Lifecycle project. It handles infrastructure provisioning in AWS so that you can deploy and upgrade Kubernetes clusters declaratively. CAPA can provision both Amazon EC2-based and Amazon EKS-based Kubernetes clusters. In this post, we’ll cover both to simulate a multi-cluster scenario in which a customer operates one self-managed Kubernetes cluster backed by Amazon EC2 and another on Amazon EKS. Amazon EKS Anywhere uses the Cluster API provider for vSphere or bare metal for creating, upgrading, and managing Kubernetes clusters.
For Amazon EKS, CAPA has the following features:
- Provisioning and managing an Amazon EKS Cluster
- Upgrading the Kubernetes version of the Amazon EKS Cluster
- Attaching self-managed machines as nodes to the Amazon EKS cluster
- Creating a machine pool and attaching it to the Amazon EKS cluster. See the machine pool docs for details.
- Creating a managed machine pool and attaching it to the Amazon EKS cluster. See the machine pool docs for details.
- Managing Amazon EKS add-ons. See the addons docs for further details.
- Creating an AWS Fargate profile for the Amazon EKS cluster
- Managing an AWS Identity and Access Management (IAM) aws-iam-authenticator configuration
What is Argo CD?
Argo CD is a declarative, GitOps continuous-delivery tool for Kubernetes. It follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state. With Argo CD, you automate the deployment of applications, as defined in a Git repository, to your target cluster environments.
Solution overview
How it works – High-level orchestration flow
In this walkthrough, we use kind to create a management cluster on the local Docker host.
(Note: kind isn’t designed for production use; it’s used here for demonstration purposes. For production, it’s recommended to use a dedicated Kubernetes cluster, such as Amazon EKS, with appropriate backup and disaster recovery (DR) policies and procedures in place. The management cluster must be at least Kubernetes v1.20.0.)
We initialize the management cluster with the Cluster API CLI. The whole cluster and application deployment process is managed by Argo CD, installed on the kind management cluster. Argo CD supports multiple clusters and lets you define applications that link the kind cluster and the to-be-created workload clusters (running on Amazon EC2 and Amazon EKS) to the Git repositories where workload configuration is stored. After the workload clusters are created, we enroll them in Argo CD as managed clusters and deploy a sample application to them.
Setup prerequisites
- An understanding of Amazon EKS, Argo CD, and Kubernetes
- Complete installation of kubectl, kind, clusterctl, clusterawsadm, the argocd CLI, the AWS CLI, jq, and Docker
- An AWS account and a configured profile with administrator permissions
You may refer to each tool’s documentation for installation instructions.
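As a quick sanity check, you can confirm the required CLIs are installed before you continue. The following commands are a minimal sketch; the exact version output varies by tool and release:
# Verify the prerequisite CLIs are available on your PATH
kubectl version --client
kind version
clusterctl version
clusterawsadm version
argocd version --client
aws --version
jq --version
docker --version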
Walkthrough
Fork the Git repository and clone it to your local workstation
This post provides sample manifest files for setting up the environment. To access them, create a fork of this GitHub repository and clone it to your local workstation:
git clone https://github.com/aws-samples/eks-ec2-clusterapi-gitops
Change to the cloned directory:
cd ./eks-ec2-clusterapi-gitops
Initialize the management cluster
kind is used to create a local Kubernetes cluster that serves as a temporary bootstrap cluster, which we then turn into the management cluster for the Cluster API Provider for AWS.
Now, let’s create the kind cluster and verify that it was created successfully:
kind create cluster
kubectl cluster-info --context kind-kind
You should be able to see the following when the Kind cluster is created successfully.
Kubernetes control plane is running at https://127.0.0.1:60624
CoreDNS is running at https://127.0.0.1:60624/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
The kind Kubernetes cluster is transformed into a management cluster by installing the Cluster API provider components. It’s recommended to keep this management cluster separate from any application workloads.
Cluster API Provider for AWS ships with clusterawsadm, a utility to help you manage the IAM objects required by this project. The clusterawsadm binary also reads your environment variables and encodes your AWS credentials into a value stored in a Kubernetes Secret on the kind cluster, which provides the permissions needed to create the workload clusters. First, create the AWS CloudFormation stack that provisions the required IAM resources:
clusterawsadm bootstrap iam create-cloudformation-stack --region us-east-1
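If you want to confirm the stack finished creating before moving on, you can query its status. This is an optional check; the stack name shown is the default created by clusterawsadm (the same stack deleted in the cleanup section):
# Expect CREATE_COMPLETE (or UPDATE_COMPLETE on subsequent runs)
aws cloudformation describe-stacks --stack-name cluster-api-provider-aws-sigs-k8s-io --region us-east-1 --query 'Stacks[0].StackStatus' --output text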
Now, let’s predefine the necessary environment parameters:
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
export AWS_REGION=us-east-1
Since we’ll use Cluster API to help us launch an Amazon EKS cluster, we need to enable the following feature gates:
export EKS=true
export EXP_MACHINE_POOL=true   # Required when using managed node groups
export CAPA_EKS_IAM=true
To install the Cluster API components for AWS, we’ll use the kubeadm bootstrap provider and the kubeadm control-plane provider, and transform the kind cluster into a management cluster with the clusterctl init command.
clusterctl init --infrastructure aws
If you see the following output, you have successfully initialized the kind cluster into a management cluster.
Fetching providers
Installing cert-manager Version="v1.9.1"
Installing Provider="cluster-api" Version="v1.2.2" TargetNamespace="capi-system"
...
Installing Provider="infrastructure-aws" Version="v1.5.0" TargetNamespace="capa-system"
...
Your management cluster has been initialized successfully!
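Optionally, you can verify that the provider controllers are running before proceeding. The namespaces below are the defaults created by clusterctl init:
# The controller pods should reach the Running state
kubectl get pods -n capi-system
kubectl get pods -n capa-system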
Generate the Kubernetes Cluster manifests for Amazon EKS and Amazon EC2
In this step, we provision one Kubernetes cluster on Amazon EC2 and one Amazon EKS cluster, in the us-east-1 and ap-southeast-2 Regions, respectively. Since we provision the workload clusters via Argo CD, the first step is to prepare the manifest files for each cluster. One quick way to do this is to use the clusterctl generate cluster command, which captures the environment variables you define and uses them to formulate the manifests.
The following is a list of environment variables you should define:
export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large
export AWS_NODE_MACHINE_TYPE=t3.large
export AWS_SSH_KEY_NAME=capi-ec2   # Adjust accordingly, since we deploy the EC2 cluster and EKS cluster in different Regions
export AWS_REGION=us-east-1        # Adjust based on where you would like to create your clusters
Since we haven’t yet created the Secure Shell (SSH) keys for remote access to the clusters’ worker nodes, let’s create them:
aws ec2 create-key-pair --key-name capi-eks --region ap-southeast-2 --query 'KeyMaterial' --output text > capi-eks.pem
aws ec2 create-key-pair --key-name capi-ec2 --region us-east-1 --query 'KeyMaterial' --output text > capi-ec2.pem
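Optionally, restrict the permissions on the downloaded private keys so that they can later be used for SSH access to the worker nodes:
# Private keys must not be world-readable for SSH to accept them
chmod 400 capi-ec2.pem capi-eks.pem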
Now, we can create our workload cluster manifests by running the following commands. (If you cloned the Git repository, these manifests already exist under the capi-cluster folder.)
For the Kubernetes cluster running on Amazon EC2, if you want to generate a new cluster template instead of using the one provided, run the following command:
clusterctl generate cluster capi-ec2 --kubernetes-version v1.24.0 --control-plane-machine-count=3 --worker-machine-count=3 > ./capi-cluster/aws-ec2/aws-ec2.yaml
For Amazon EKS, if you want to generate a new cluster template instead of using the one provided, run the following command (the eks-managedmachinepool flavor dictates that a managed node group is used when creating the worker nodes for the Amazon EKS cluster):
clusterctl generate cluster capi-eks --flavor eks-managedmachinepool --kubernetes-version v1.22.6 --worker-machine-count=2 > ./capi-cluster/aws-eks/capi-eks.yaml
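If you’re curious about what was generated, you can list the object kinds in each manifest. This is only an optional inspection step; the exact kinds and API versions depend on your clusterctl and CAPA versions:
# Show the Cluster API objects that make up each workload cluster definition
grep '^kind:' ./capi-cluster/aws-ec2/aws-ec2.yaml
grep '^kind:' ./capi-cluster/aws-eks/capi-eks.yaml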
Install Argo CD on the management Kubernetes cluster
According to the Cluster API documentation, one way to deploy a new Amazon EKS or Kubernetes cluster is to run kubectl apply to apply the generated manifests to the kind management cluster. However, we’ll use Argo CD to streamline the workload cluster deployment based on the generated manifests stored in the Git repository.
First, let’s install Argo CD in the Kind management cluster:
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Now, let’s log in to Argo CD’s web portal by using port-forwarding:
kubectl port-forward svc/argocd-server 8080:80
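Argo CD generates an initial password for the admin user. Assuming the default installation above (which installs Argo CD into your current namespace), you can retrieve it from the argocd-initial-admin-secret Secret:
# Print the initial admin password for the Argo CD web UI and CLI
kubectl get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d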
Log in to Argo CD to verify the installation is correctly performed:
Create application in Argo CD to deploy a new Amazon EKS cluster and K8S cluster that runs on Amazon EC2
Argo CD provides an Application CRD that maps the configuration code in a given Git repository to a Kubernetes namespace. Before we deploy the Argo CD application in the management cluster, we first need to make sure that Argo CD can authenticate to Git. To do this, log in to Argo CD’s web UI, navigate to the Settings gear on the menu bar on the left-hand side, then choose CONNECT REPO USING HTTPS. Fill in the proper settings to connect to your Git repo.
https://github.com/aws-samples/eks-ec2-clusterapi-gitops (You should replace this with your forked Git repo URL)
For Argo CD to be authorized to manage Cluster API objects, we create a ClusterRoleBinding to associate the necessary permissions with the service account assumed by Argo CD. For simplicity, let’s bind the cluster-admin role to the argocd-application-controller ServiceAccount used by Argo CD.
kubectl apply -f ./management/argo-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-argocd-contoller
subjects:
  - kind: ServiceAccount
    name: argocd-application-controller
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
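As an optional check, you can confirm that the service account now has the expected permissions on Cluster API objects:
# Should print "yes" once the ClusterRoleBinding has been applied
kubectl auth can-i create clusters.cluster.x-k8s.io --as=system:serviceaccount:default:argocd-application-controller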
To create a new Amazon EKS cluster or a Kubernetes cluster that runs on Amazon EC2, we define an application declaratively. This represents a collection of Kubernetes manifests that makes up all the pieces needed to deploy the new cluster. In this guide, the configurations of the Argo CD applications to be added are stored in the management folder of the cloned repository. For example, the following is the application manifest file for creating a new Kubernetes cluster that runs on Amazon EC2:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ec2-cluster
spec:
  destination:
    name: ''
    namespace: 'default'
    server: 'https://kubernetes.default.svc'
  source:
    path: capi-cluster/aws-ec2
    repoURL: '[update the Git Repo accordingly]' # Indicate which source repo to fetch the cluster configuration from
    targetRevision: HEAD
  project: default # You can give a project name here
  syncPolicy:
    automated:
      prune: true
      allowEmpty: true
After modifying the Argo CD application YAML file as above, commit and push the updates to the source repository:
git add .
git commit -m "updates Argo CD app manifest file"
git push
Run the following command to create a new Application that provisions the Kubernetes cluster running on Amazon EC2:
kubectl apply -f ./management/argocd-ec2-app.yaml
Similarly, we’ll use an Argo CD application to help us spin up the Amazon EKS cluster. An application manifest has already been created for this example. Remember to modify the corresponding YAML file to ensure repoURL points to your own Git repository, then commit and push the update to the source repository.
Apply the Argo CD application to spin up the Amazon EKS cluster:
kubectl apply -f ./management/argocd-eks-app.yaml
After you deploy the two applications in Argo CD, you should see them created in Argo CD’s web UI, as shown in the following screenshot:
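You can also list the applications from the command line with kubectl, assuming Argo CD was installed into your current namespace as shown earlier:
# The Application objects live in the management cluster
kubectl get applications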
Now, we list all the clusters being provisioned:
$ kubectl get clusters
NAME       PHASE          AGE   VERSION
capi-ec2   Provisioning   70s
capi-eks   Provisioning   39s
Check the status of the clusters:
clusterctl describe cluster capi-eks
clusterctl describe cluster capi-ec2
NAME                                                          READY  SEVERITY  REASON  SINCE
Cluster/capi-ec2                                              True                     2m53s
├─ClusterInfrastructure - AWSCluster/capi-ec2                 True                     9m20s
├─ControlPlane - KubeadmControlPlane/capi-ec2-control-plane   True                     2m53s
│ └─3 Machines...                                             True                     6m2s
└─Workers
  └─MachineDeployment/capi-ec2-md-0                           False  Warning           12m
    └─3 Machines...                                           True                     6m14s
For the cluster running on Amazon EC2, you should notice that the READY status of the worker nodes remains False. Because there’s no Container Network Interface (CNI) plugin in place in the cluster, the worker nodes haven’t joined the cluster successfully. While you could automate the CNI deployment via Argo CD, let’s deploy the Calico CNI into the Amazon EC2 cluster manually for demonstration purposes.
Fetch the kubeconfig file for the newly created cluster on Amazon EC2:
clusterctl get kubeconfig capi-ec2 > capikubeconfig-ec2
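Optionally, you can confirm the symptom described above before installing the CNI; the worker nodes report a NotReady status:
# Nodes stay NotReady until a CNI plugin is installed
kubectl --kubeconfig=./capikubeconfig-ec2 get nodes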
Deploy Calico CNI:
kubectl --kubeconfig=./capikubeconfig-ec2 apply -f https://docs.projectcalico.org/v3.21/manifests/calico.yaml
After the Calico CNI is successfully deployed, check the cluster status again:
NAME                                                          READY  SEVERITY  REASON  SINCE
Cluster/capi-ec2                                              True                     13m
├─ClusterInfrastructure - AWSCluster/capi-ec2                 True                     20m
├─ControlPlane - KubeadmControlPlane/capi-ec2-control-plane   True                     13m
│ └─3 Machines...                                             True                     16m
└─Workers
  └─MachineDeployment/capi-ec2-md-0                           True                     1s
    └─3 Machines...                                           True                     16m
Navigate to the Amazon EC2 service in the AWS Management Console and select the us-east-1 Region. You’ll see that all the control plane and worker nodes have spun up and are running.
Let’s also verify the Amazon EKS cluster has been deployed successfully:
clusterctl describe cluster capi-eks
NAME                                                             READY  SEVERITY  REASON  SINCE  MESSAGE
Cluster/capi-eks                                                 True                     3m22s
└─ControlPlane - AWSManagedControlPlane/capi-eks-control-plane   True                     3m22s
Navigate to the Amazon EKS service in the AWS Management Console and select the ap-southeast-2 Region; you should see the cluster created.
When you create an Amazon EKS cluster, kubeconfigs are generated and stored as secrets in the management cluster. This differs from creating a non-managed cluster using the AWS provider. The name of the secret that contains the kubeconfig is [cluster-name]-user-kubeconfig, where you replace [cluster-name] with the name of your cluster.
To get the user kubeconfig for a cluster named capi-eks, you can run a command similar to:
kubectl --namespace=default get secret capi-eks-user-kubeconfig -o jsonpath={.data.value} | base64 --decode > capikubeconfig-eks
List all available worker nodes of the newly created Amazon EKS cluster:
kubectl --kubeconfig=./capikubeconfig-eks get no
NAME                                              STATUS   ROLES   AGE   VERSION
ip-10-0-191-240.ap-southeast-2.compute.internal   Ready            42m   v1.22.6-eks-7d68063
ip-10-0-247-161.ap-southeast-2.compute.internal   Ready            42m   v1.22.6-eks-7d68063
Deploy a sample application into the newly created clusters
For us to deploy a sample application into the workload clusters via Argo CD, we need to add the Amazon EC2/Amazon EKS cluster to Argo CD as a managed cluster. For demonstration purposes, I’ll only show the steps for deploying a sample nginx application onto the Amazon EKS cluster; the steps for deploying to the Amazon EC2 cluster are quite similar. First, you need to enroll the Amazon EKS cluster as a managed cluster in Argo CD.
Log in to the Argo CD instance (with port forwarding enabled). We use the argocd CLI to authenticate against the Argo CD server:
argocd login localhost:8080
(When asked whether to proceed with an insecure connection, answer yes.)
Get the context name of the newly created Amazon EKS cluster:
kubectl config get-contexts --kubeconfig=./capikubeconfig-eks
CURRENT   NAME                                                                  CLUSTER                          AUTHINFO
*         default_capi-eks-control-plane-user@default_capi-eks-control-plane   default_capi-eks-control-plane   default_capi-eks-control-plane-user
Run the following command to add the newly created Amazon EKS cluster as a managed cluster in Argo CD:
argocd cluster add default_capi-eks-control-plane-user@default_capi-eks-control-plane --server localhost:8080 --insecure --kubeconfig capikubeconfig-eks
Answer yes to the following warning:
WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `default_capi-eks-control-plane-user@default_capi-eks-control-plane` with full cluster level admin privileges. Do you want to continue [y/N]?
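After answering yes, you can confirm the cluster is now registered with Argo CD:
# The Amazon EKS cluster should appear alongside the in-cluster entry
argocd cluster list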
Retrieve the API endpoint URL of the Amazon EKS cluster, which is used as the destination server in the application manifest:
aws eks describe-cluster --name default_capi-eks-control-plane --region ap-southeast-2 | jq '.cluster.endpoint'
The following is the application manifest for deploying the sample application to the Amazon EKS cluster:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: eks-nginx
spec:
  destination:
    name: ''
    namespace: ''
    server: '[Replace with your EKS API Endpoint URL]' # Replace with the API endpoint URL of your EKS cluster
  source:
    path: workload/aws-eks
    repoURL: '[Replace with your own Git Repo URL]' # Indicate which source repo to fetch the application configuration from
    targetRevision: HEAD
  project: default # You can give a project name here
  syncPolicy:
    automated:
      prune: true
      allowEmpty: true
Next, we create a new Application in Argo CD to streamline the sample application deployment to the Amazon EKS cluster. Before applying the following command, modify the manifest file above to ensure repoURL points to your Git repository, then commit and push the updates. Apply the following command to deploy the sample nginx application:
kubectl apply -f ./app/eks-app/eks-nginx-deploy.yaml
In Argo CD’s web portal, you’ll see that the sample nginx app has been deployed into the Amazon EKS cluster successfully.
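You can also confirm the deployment from the command line. The application name eks-nginx comes from the manifest above, and the exact workload resource names depend on the manifests stored under workload/aws-eks in your fork:
# Check the sync and health status of the Argo CD application
argocd app get eks-nginx
# List the nginx pods created in the Amazon EKS workload cluster
kubectl --kubeconfig=./capikubeconfig-eks get pods --all-namespaces | grep -i nginx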
Cleaning up
To avoid unwanted cloud costs, clean up the environment.
Delete the Argo CD applications:
kubectl delete -f ./management/argocd-ec2-app.yaml
kubectl delete -f ./management/argocd-eks-app.yaml
kubectl delete -f ./app/eks-app/eks-nginx-deploy.yaml
Delete the created clusters:
kubectl delete cluster capi-eks
kubectl delete cluster capi-ec2
Delete the created Amazon EC2 SSH key in each Region:
aws ec2 delete-key-pair --key-name capi-eks --region ap-southeast-2
aws ec2 delete-key-pair --key-name capi-ec2 --region us-east-1
Delete the CloudFormation stack that was created by the clusterawsadm bootstrap command to provision the necessary IAM resources:
aws cloudformation delete-stack --stack-name cluster-api-provider-aws-sigs-k8s-io --region us-east-1
Delete the management cluster:
kind delete cluster
Conclusion
In this post, I introduced Cluster API and explained how you can use it to manage multiple Kubernetes clusters instead of struggling with different APIs and toolsets to maintain them. By pairing it with Argo CD, a GitOps tool, you can use Git as the single source of truth for all your cluster and workload manifest files and let Argo CD continuously deliver new workloads or configuration updates to the target environments.