Overview
Cloud Controller Manager
Cloud Controller Manager (CCM) is a vital component in Kubernetes that bridges your cluster with various cloud providers. By abstracting cloud-specific logic, it ensures seamless interaction between Kubernetes and cloud resources like virtual machines, load balancers, and storage.
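For example, once the AWS cloud provider is wired up (as we do later in this guide), creating a Service of type LoadBalancer prompts the CCM to provision an AWS load balancer. Below is a minimal illustrative manifest; the name, selector, and ports are hypothetical:
# Illustrative only: with a cloud controller manager running, this
# Service causes the cloud provider to create an external load balancer.
apiVersion: v1
kind: Service
metadata:
  name: demo-web            # hypothetical name
spec:
  type: LoadBalancer        # fulfilled by the cloud controller manager
  selector:
    app: demo-web           # assumed pod label
  ports:
    - port: 80              # load balancer port
      targetPort: 8080      # assumed container port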
Kubeadm
Kubeadm is a tool in the Kubernetes ecosystem designed to simplify bootstrapping and managing Kubernetes clusters. In this guide, we will use kubeadm to build a two-node cluster on AWS and connect it to the AWS Cloud Controller Manager.
Things Required:
2 EC2 nodes (t2.medium)
Configure AWS Security Group & Miscellaneous Steps
Step 1: Create IAM Roles
IAM Policy for the Master Node
Create the policy below.
Attach the policy to an IAM role.
Attach the IAM role to the master node (a CLI sketch follows the policy document).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:DescribeAvailabilityZones",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:DeregisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole",
        "kms:DescribeKey"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
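If you prefer the AWS CLI to the console, the flow looks roughly as follows. This is a sketch: the role, policy, and profile names are placeholders, ec2-trust-policy.json is a standard EC2 trust policy document you must supply, and the instance ID stands in for your master node's.
# Sketch, assuming the policy JSON above is saved as master-policy.json
aws iam create-policy --policy-name k8s-master-policy \
    --policy-document file://master-policy.json
aws iam create-role --role-name k8s-master-role \
    --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name k8s-master-role \
    --policy-arn arn:aws:iam::<account-id>:policy/k8s-master-policy
aws iam create-instance-profile --instance-profile-name k8s-master-profile
aws iam add-role-to-instance-profile --instance-profile-name k8s-master-profile \
    --role-name k8s-master-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=k8s-master-profile
The same sequence, using the worker policy JSON below, applies to the worker node's role.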
IAM Policy for the Worker Nodes
Create the policy below.
Attach the policy to an IAM role.
Attach the IAM role to the worker node.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
Step 2: Create 2 EC2 Instances
Instance Name: Assign a name to your instances, for example, kube-nodes.
Select Image: Choose the Canonical Ubuntu 24.04 LTS image.
Configure Your Kubernetes Node
Select Instance Type: Choose t2.medium as your instance type.
Key Pair: Either select an existing key pair or create a new one by clicking on “Create new key pair.”
Configure Security and Instance Settings
Open Inbound Security Group Rules: Allow ports 80, 6443, and 31080 for inbound traffic from any source. You will also need port 22 open for SSH access, and the nodes must be able to reach each other (for example, the kubelet port 10250).
Number of Instances: Change the number of instances to 2.
Configure User Data
Copy the code snippet below and paste it into the "User data" section of your instance configuration.
#!/bin/bash
sudo swapoff -a
# Create the .conf file to load the modules at bootup
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
## Install CRI-O Runtime
sudo apt-get update -y
sudo apt-get install -y software-properties-common curl apt-transport-https ca-certificates gpg
curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/ /" | sudo tee /etc/apt/sources.list.d/cri-o.list
sudo apt-get update -y
sudo apt-get install -y cri-o
sudo systemctl daemon-reload
sudo systemctl enable --now crio
echo "CRI runtime installed successfully"
# Add Kubernetes APT repository and install required packages
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet="1.29.0-*" kubectl="1.29.0-*" kubeadm="1.29.0-*"
sudo apt-get update -y
sudo apt-get install -y jq
sudo systemctl enable --now kubelet
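Once the instances boot, you can confirm the user-data script finished. These checks are standard commands, not part of the original script:
# The container runtime should be active, and the Kubernetes tools installed
sudo systemctl status crio --no-pager
kubeadm version
kubectl version --client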
Install Kubernetes Cluster (Master and Worker Nodes)
Paste Installation Script in User Data:
The script above will:
- Disable swap
- Configure the kernel modules and sysctl params required for networking
- Install the CRI-O runtime
- Install kubeadm, kubelet, and kubectl
Launch the Instance:
After pasting the script into the "User data" section, click on Launch instance to start your Kubernetes setup.
Arrange Node Names
Assign Names to Your Instances:
Master Node: Name the instance masternode1.
Worker Node: Name the instance workernode1.
Naming the instances makes it clear which node takes the master (control plane) role and which takes the worker role.
Connect to the Instances:
Once the names are assigned and each node has launched, select an instance and click on Connect.
Connect to the Master Node
Change to Superuser (optional):
Once connected, switch to the superuser:
sudo su
Step 3: Change the Hostname
The AWS cloud provider expects each node's hostname to match its EC2 private DNS name, so set the hostname on every node accordingly:
sudo hostnamectl set-hostname ip-<private-IP>.<AWS-region>.compute.internal
For example:
sudo hostnamectl set-hostname ip-172-31-10-49.ap-south-1.compute.internal
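Rather than typing the name by hand, you can read it from the instance metadata service. This IMDSv2 lookup is an optional convenience, not part of the original steps:
# Fetch the instance's private DNS name via IMDSv2 and set it as the hostname
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
sudo hostnamectl set-hostname "$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/local-hostname)"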
Step 4: Initialize Kubeadm Configuration (Only in Controller)
In this step, we will initialize the control plane with configurations required for the cloud controller manager.
Create a configuration file named kubeadm.config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs:
    - 127.0.0.1
    - 3.87.212.191
  extraArgs:
    bind-address: "0.0.0.0"
    cloud-provider: external
clusterName: kubernetes
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
    cloud-provider: external
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: ip-172-31-84-2.ec2.internal
  kubeletExtraArgs:
    cloud-provider: external
Replace the second certSANs entry with your master node's public IP.
Set nodeRegistration > name to your master node's private DNS name.
Then bootstrap the cluster with this configuration.
Initialization of Kubeadm (only for Master Node)
kubeadm init --config=kubeadm.config
Set Up Kubeconfig for Regular User
1. Initialize Kubeadm (Only for Master Node):
Run the kubeadm init command above to initialize the Kubernetes cluster on the master node.
2. Set Up Kubeconfig for the Regular User:
After initializing kubeadm, set up the kubeconfig for the ubuntu user on the master node. This will allow you to start using your cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Check Node Status
Verify Node Status:
After initializing kubeadm, check the status of your nodes:
kubectl get nodes
The nodes may show as NotReady because no pod network is installed yet. Let's fix this.
Step 5: Deploy a Pod Network
To enable communication between pods, you need to deploy a pod network. Here's how to deploy Calico, a popular networking solution for Kubernetes:
1. Deploy Calico:
Apply the Calico manifest to set up the network:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
2. Verify Calico Deployment:
Check the status of the Calico pods to ensure they are running correctly (see the checks below).
3. Verify Node Status:
After deploying Calico, verify the node status again to ensure the nodes are Ready.
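A couple of standard checks cover steps 2 and 3 (the label selector assumes the stock Calico manifest applied above):
# Calico pods should reach the Running state
kubectl get pods -n kube-system -l k8s-app=calico-node
# Nodes should transition to Ready
kubectl get nodes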
At the end of the initialization, the controller generates a token for the workers to join. This token is required to create the worker join configuration file.
Add Worker Nodes to the Cluster (Run these commands only on Worker Nodes)
1. Join Worker Nodes to the Cluster:
On each worker node (here, workernode1), create the join configuration file shown below.
The token and hash were provided during the kubeadm initialization on the master node.
2. Retrieve the Token (if needed):
Note: If you need to view the token again, run the following command on the master node:
kubeadm token create --print-join-command
This will display the command with the token and discovery token CA cert hash, which you can then use in the join configuration.
3. Verify Worker Nodes:
After running the join command on the worker nodes, verify their status from the master node with kubectl get nodes.
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: 42bkt7.1slsu73wzgo3d7hf
    apiServerEndpoint: "172.31.84.2:6443"
    caCertHashes:
      - "sha256:a6d3dcbc8703b18b91ae136096f435c622fba2c8427db85a6bcec9e9c2e33286"
nodeRegistration:
  name: ip-172-31-95-48.ec2.internal
  kubeletExtraArgs:
    cloud-provider: external
Change the token, apiServerEndpoint, and caCertHashes values to the values your control plane generated when it initialized. Set nodeRegistration > name to the full hostname (private DNS address) of the respective worker node.
Save the file as kubeadm-join-config.yaml, then run the following command on each worker node to join it to the controller:
kubeadm join --config kubeadm-join-config.yaml
Step 6: Tag AWS Resources
Tagging is essential when configuring the Cloud Controller Manager because it records which AWS resources are used by which cluster.
For example, if a cluster uses an AWS Network Load Balancer and the cluster is later destroyed, tagging ensures that only that NLB is destroyed and other resources are unaffected. To tag the resources, the cluster name (which serves as the cluster ID) is required.
To find the cluster name, use the following command:
kubectl config view
The output will look like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.31.21.29:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
If the AWS resources are managed by a single cluster, use the tag key kubernetes.io/cluster/kubernetes (where kubernetes is the cluster name from the output above) with the tag value owned. If the resources are shared by multiple clusters, use the same tag key with the tag value shared.
Tags should be added to the resources the controller and worker nodes consume, such as the VPC, subnets, EC2 instances, and security groups.
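Tagging can be done from the console or, as a rough sketch, with the AWS CLI (the resource IDs below are placeholders for your own instance, subnet, and security group IDs):
# Tag the cluster's resources as owned by the cluster named "kubernetes"
aws ec2 create-tags \
    --resources i-0123456789abcdef0 subnet-0abc1234 sg-0def5678 \
    --tags Key=kubernetes.io/cluster/kubernetes,Value=owned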
Step 7: Configure the Cloud Controller Manager
Clone the AWS cloud provider repository to the control plane node where you have kubectl access:
git clone https://github.com/kubernetes/cloud-provider-aws.git
Navigate to the base directory. It contains all the Kubernetes manifests for the cloud controller manager and the Kustomize file.
cd cloud-provider-aws/examples/existing-cluster/base
Create the DaemonSet using the following command (-k tells kubectl to apply the Kustomize directory):
kubectl create -k .
To verify the DaemonSet is running properly, use the following command:
kubectl get daemonset -n kube-system
To ensure the CCM pods are running, use the following command:
kubectl get pods -n kube-system
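Once the CCM is running, it initializes each node: the node.cloudprovider.kubernetes.io/uninitialized taint is removed and the node is assigned an AWS provider ID. A quick way to confirm this (a standard kubectl query, not part of the original walkthrough):
# Each node should report an aws:// provider ID once the CCM has initialized it
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER:.spec.providerID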