Kubernetes on Ubuntu 24.04: A Step-by-Step Installation Guide

Hey guys! Ready to dive into the world of Kubernetes on Ubuntu 24.04? This guide walks you through setting up a Kubernetes cluster, step by step. Kubernetes, often abbreviated as K8s, is a powerful open-source system for automating the deployment, scaling, and management of containerized applications. Ubuntu 24.04, the current LTS (Long Term Support) release, provides a solid and reliable foundation for running Kubernetes. Let's get started!

Prerequisites

Before we begin, make sure you have the following:

  • Multiple Ubuntu 24.04 Servers: You'll need at least two servers. One will act as the master node, and the others will be worker nodes. For a production environment, it's recommended to run an odd number of master nodes (typically three) so etcd keeps quorum and the control plane stays highly available.
  • Sudo Privileges: Ensure you have sudo privileges on all the servers.
  • Internet Access: All servers should have internet access to download the necessary packages.
  • Basic Linux Knowledge: Familiarity with basic Linux commands will be helpful.

Hardware Requirements

  • Master Node: Minimum 2 CPUs, 4GB RAM
  • Worker Nodes: Minimum 1 CPU, 2GB RAM

Note: These are minimum requirements. Depending on the workload, you might need more resources.

Step 1: Update and Upgrade Packages

First, log in to each of your Ubuntu 24.04 servers, update the package lists, and upgrade the installed packages to their latest versions. This ensures you have the latest security patches and bug fixes.

sudo apt update
sudo apt upgrade -y

Why is this important? Updating packages ensures that you're starting with a clean and secure base. It resolves potential dependency issues and minimizes the risk of encountering bugs during the installation process. Think of it as prepping your canvas before painting a masterpiece!
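
Kubernetes also expects some host-level preparation that a stock Ubuntu 24.04 install doesn't have: the kubelet won't start with swap enabled (by default), and kubeadm's preflight checks expect the br_netfilter module to be loaded, bridged traffic to be visible to iptables, and IP forwarding to be on. A minimal sketch, run on every node:

# Disable swap now and keep it disabled across reboots
# (double-check /etc/fstab afterwards to make sure the swap line is commented out)
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Load the kernel modules used by the container runtime and kube-proxy
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Make bridged traffic visible to iptables and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system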

Step 2: Install Container Runtime (Docker)

Kubernetes needs a container runtime to run containers. Docker is a popular choice, and we'll use it in this guide; keep in mind, though, that Kubernetes 1.24 removed the built-in dockershim, so the kubelet talks to a CRI-compatible runtime rather than to Docker Engine directly (there's a note on this at the end of this step). Let's install Docker on all the nodes.

Install Docker

sudo apt install docker.io -y

Start and Enable Docker

sudo systemctl start docker
sudo systemctl enable docker

Verify Docker Installation

Check the Docker version to ensure it's installed correctly.

docker --version

Configure Docker

Configure Docker to use systemd as the cgroup driver. kubeadm configures the kubelet to use the systemd cgroup driver by default, and the container runtime must use the same driver so that container resources (CPU, memory) are accounted for consistently.

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker

Why systemd cgroup driver? Kubernetes uses cgroups (control groups) to manage resources like CPU and memory for containers. Using systemd as the cgroup driver ensures consistency with the operating system and avoids potential resource management issues. It's like making sure all the gears in a machine are perfectly aligned!
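
One caveat worth knowing: because Kubernetes 1.24 removed the dockershim, the kubelet can no longer drive Docker Engine directly. The docker.io package pulls in containerd, and the simplest path is to let kubeadm use that containerd instance over CRI. A sketch, assuming the stock containerd that ships alongside docker.io (run on every node):

# Generate containerd's full default config, which enables its CRI plugin
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null

# Switch runc to the systemd cgroup driver so it matches Docker and the kubelet
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

sudo systemctl restart containerd

If kubeadm later complains about multiple CRI sockets, point it at containerd explicitly with --cri-socket unix:///run/containerd/containerd.sock, or install cri-dockerd if you specifically want the kubelet to keep using Docker Engine.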

Step 3: Install Kubernetes Components

Now, let's install the Kubernetes components on all the nodes: kubeadm, kubelet, and kubectl. These components are essential for setting up and managing the Kubernetes cluster.

Add Kubernetes Repository

Add the Kubernetes apt repository to your system. The packages are published at pkgs.k8s.io, and the repository is pinned to a minor version (v1.29 here, matching the versions shown later in this guide); the older apt.kubernetes.io repository has been retired.

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update

Install Kubeadm, Kubelet, and Kubectl

sudo apt install kubeadm kubelet kubectl -y

Hold Package Versions

To prevent accidental upgrades, hold the package versions.

sudo apt-mark hold kubeadm kubelet kubectl

Why hold package versions? Holding package versions ensures that your Kubernetes components remain stable. Unintended upgrades can sometimes introduce compatibility issues or break your cluster. It's like locking down your tools after you've built something great!
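
To confirm everything installed cleanly and the packages are pinned, a quick check:

# Confirm the installed versions
kubeadm version
kubelet --version
kubectl version --client

# Confirm the packages are on hold
apt-mark showhold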

Step 4: Initialize the Kubernetes Cluster (Master Node)

Next, initialize the Kubernetes cluster on the master node. This process sets up the control plane components, such as the API server, scheduler, and controller manager.

Initialize Kubeadm

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Important: The --pod-network-cidr flag specifies the IP address range for pods. Make sure it doesn't overlap with your existing network, and that it matches what your pod network add-on expects (see the Calico note below).

Configure Kubectl

After initialization, configure kubectl to interact with the cluster.

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
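
A quick sanity check that kubectl can now reach the API server:

kubectl cluster-info
kubectl get nodes

The master node will show up as NotReady until a pod network is deployed in the next step; that's expected.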

Deploy a Pod Network

Deploy a pod network. We'll use Calico in this example; it's a popular choice for providing networking and network policy for Kubernetes. Note that Calico's default manifest typically assumes a 192.168.0.0/16 pod CIDR, so either initialize the cluster with --pod-network-cidr=192.168.0.0/16 instead, or set the (commented-out) CALICO_IPV4POOL_CIDR variable in calico.yaml to 10.244.0.0/16 before applying it.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Why a pod network? Kubernetes requires a pod network to enable communication between pods. Without it, pods can't talk to each other, and your applications won't work correctly. It's like building roads and bridges so cars can travel between cities!
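
It can take a minute or two for the Calico pods to pull their images and start. You can watch progress with:

kubectl get pods -n kube-system --watch

Once the calico-node and coredns pods are Running, the master node should report Ready.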

Step 5: Join Worker Nodes to the Cluster

Now, join the worker nodes to the cluster. You'll need the kubeadm join command from the output of the kubeadm init command on the master node.

Get the Join Command

If you don't have the join command, you can generate a new one on the master node:

sudo kubeadm token create --print-join-command

Join the Worker Nodes

Run the kubeadm join command on each worker node.

sudo kubeadm join <master-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

What if the join command fails? Common issues include network connectivity problems, incorrect tokens, or certificate errors. Double-check the command, ensure the worker nodes can reach the master node, and verify the token and certificate hash.
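
A few checks that usually narrow it down (replace <master-ip> with your master's address, as in the join command above):

# From the worker: can it reach the API server port on the master?
# (install the netcat-openbsd package if nc isn't available)
nc -vz <master-ip> 6443

# On the master: is the token still valid? Tokens expire after 24 hours by default.
sudo kubeadm token list

# If it has expired, print a fresh join command
sudo kubeadm token create --print-join-command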

Step 6: Verify the Cluster

After joining the worker nodes, verify that the cluster is up and running. On the master node, run:

kubectl get nodes

You should see all the nodes listed, with their status as Ready.

NAME       STATUS   ROLES           AGE     VERSION
master     Ready    control-plane   20m     v1.29.0
worker1    Ready    <none>          10m     v1.29.0
worker2    Ready    <none>          10m     v1.29.0

Check Pod Status

Check the status of the Kubernetes system pods.

kubectl get pods -n kube-system

All the core Kubernetes components should be running and healthy.

Step 7: Deploy a Sample Application

Let's deploy a sample application to test the cluster. We'll deploy a simple Nginx deployment.

Create a Deployment

kubectl create deployment nginx --image=nginx

Expose the Deployment

kubectl expose deployment nginx --port=80 --type=NodePort

Get the Service URL

kubectl get service nginx

Find the NodePort for the Nginx service and access the application in your browser using the IP address of one of the worker nodes and the NodePort.

What's a NodePort? A NodePort exposes the service on each node's IP address at a static port. This allows you to access the service from outside the cluster.
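
For example, you can read the assigned NodePort with jsonpath and hit it with curl (substitute the IP of one of your worker nodes):

NODE_PORT=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
echo "Nginx is listening on port ${NODE_PORT} of every node"
curl http://<worker-node-ip>:${NODE_PORT}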

Step 8: Set Up Monitoring (Optional)

Setting up monitoring is crucial for maintaining a healthy Kubernetes cluster. Prometheus and Grafana are popular tools for monitoring Kubernetes.

Install Prometheus and Grafana (kube-prometheus)

One common way to install the Prometheus Operator together with Prometheus, Alertmanager, and Grafana is the kube-prometheus project. Clone the repository and apply its manifests, starting with the setup directory, which creates the monitoring namespace and the custom resource definitions:

git clone https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus
kubectl apply --server-side -f manifests/setup
kubectl wait --for condition=Established --all CustomResourceDefinition --namespace=monitoring
kubectl apply -f manifests/

Grafana is included in these manifests, so it does not need to be installed separately.

Follow the kube-prometheus documentation to customize dashboards, data sources, and alerting rules for your cluster.
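
Assuming the kube-prometheus manifests above, Grafana runs in the monitoring namespace and can be reached locally with a port-forward (the default login on first start is admin/admin):

kubectl --namespace monitoring port-forward svc/grafana 3000

Then browse to http://localhost:3000.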

Why monitoring? Monitoring provides insights into the health and performance of your cluster and applications. It helps you identify issues early and prevent downtime.

Conclusion

Congratulations! You've successfully set up a Kubernetes cluster on Ubuntu 24.04. This guide covered the essential steps, from installing the necessary components to deploying a sample application. Remember to explore further and customize your setup to meet your specific needs. Kubernetes is a powerful tool, and with a little practice, you'll be managing containerized applications like a pro! Keep experimenting, and have fun exploring the world of Kubernetes!