lab-k8s
Practice Kubernetes in a single-host environment via KinD, which uses Docker as its container runtime.
Objectives
This exercise will guide you through the process of provisioning and managing a Kubernetes cluster using Kind (Kubernetes in Docker) on an OpenStack / Switch Engines (SE) instance. From an IaaS perspective, your infrastructure is the host where KinD is installed.
Tasks:
- Install KinD on your instance.
- Provision a KinD cluster with the base image.
- Interact with the cluster to understand its components.
- Modify the cluster configuration to add worker nodes.
- Reprovision the cluster and verify the new setup.
- Deploy a microservice with a load balancer and test it.
- Tear down the cluster and snapshot the instance.
Prerequisites
- Ensure you have access to a beefy Switch Engines Linux instance. A c1.large instance (at least 4 vCPUs and 4 GB RAM) should be OK. KinD uses quite a few resources; after all, we are simulating a full-fledged cluster.
- For the purposes of this exercise, choose Ubuntu 22.04 as the image.
Part 1: Installing Kind on your VM
Log in to your VM via SSH.
1. Installing Docker
If Docker is not already installed on your instance, install it using the following commands:
- Set up Docker's apt repository:
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
- Install Docker packages:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- Post-install steps: by default, Docker doesn't work for non-root users, and running it as root is dangerous. Instead, add your user to the docker group:
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
Afterwards, check your Docker installation by running:
docker run hello-world
2. Installing Kind
Assuming an AMD64 / x86_64 system, run:
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
Then, verify the installation:
kind version
Good, you are ready to create a cluster.
Part 2: Provision a Kind cluster with the base image
The command kind create cluster provisions a single-node cluster by default. To specify more nodes and the roles assigned to them, you shall:
- Write a kind-config.yaml configuration file following the advanced method, which specifies a control-plane node and a couple of worker nodes (a sketch is shown below)
- Run kind create cluster --config kind-config.yaml
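A minimal sketch of such a kind-config.yaml, using KinD's cluster configuration format (one control-plane node and two workers; adjust the node list to your liking):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane   # hosts the Kubernetes control plane
- role: worker
- role: worker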
To confirm that the cluster was correctly provisioned, run:
kind get clusters
kind get nodes
Part 3: Interacting with the Cluster
1. Installing Kubectl
To interact with your cluster, let's first install the official K8s CLI tool kubectl:
# Download the binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# Install kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Test kubectl
kubectl version --client
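Note that kind create cluster already writes a context named kind-<cluster name> into your default kubeconfig, so kubectl should point at the new cluster out of the box. If it doesn't, you can select the context explicitly (assuming the default cluster name, kind):
kubectl cluster-info --context kind-kind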
2. Check the Nodes
Run:
kubectl get nodes -o wide
- How many nodes are deployed?
- Are they all working? Try to ping them.
- What's the cluster's overlay IP network?
- Compare with the output of the command ip addr: what kind of host-level network is the overlay?
- Are there any pods running?
3. Check the Cluster
A K8s cluster is much more than its nodes. Check all the moving parts of your cluster. It should be empty for now, but use this command later to verify if your deployment is successful:
kubectl get all
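Note that kubectl get all only looks at the default namespace; to also see the system components (e.g. those in kube-system), add the -A flag:
kubectl get all -A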
Part 4: Modifying the Cluster Configuration
You already have a configuration file. Make some alterations to it: add or remove worker nodes, but, of course, do not remove the control-plane node! Then, launch the cluster again.
To re-provision, first delete the original cluster (hint: check kind's man-page), and then recreate it using the updated configuration file.
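For reference, assuming the default cluster name kind, the re-provisioning could look like this:
kind delete cluster                            # removes the cluster named "kind"
kind create cluster --config kind-config.yaml  # recreate with the updated config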
- Has the number of nodes changed?
Part 5: Actually deploying an application
1. Installing a Load Balancer
While Kind allows us to test out a K8s cluster, it doesn't have all the bells and whistles. One of the things it lacks is a load balancer. Luckily, K8s is easily extensible, so let's install one: we'll use MetalLB.
To install the MetalLB load balancer:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml
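Before configuring the address pool, it may help to wait until the MetalLB pods are ready; one way to do so, assuming the labels set by the manifest above, is:
kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s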
Then, in the VM, create a config file metallb.yaml:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: default
namespace: metallb-system
spec:
addresses:
- 172.18.255.1-172.18.255.250 # Adjust this range based on your Docker network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: default
namespace: metallb-system
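The address pool must fall inside the Docker network that KinD created (named kind by default). One quick way to check its subnet, assuming that default name, is:
docker network inspect -f '{{.IPAM.Config}}' kind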
Then, we need to apply this configuration to the cluster:
kubectl apply -f metallb.yaml
Good, now we can create the Deployment and Service, which in this case we'll combine into one large manifest file:
2. Deployment and Service File
Finally! Here is the little app we are going to deploy. Create a YAML deployment file lb-deployment.yaml in your VM with the following content:
# lb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: http-echo
spec:
replicas: 2
selector:
matchLabels:
app: http-echo
template:
metadata:
labels:
app: http-echo
spec:
nodeSelector:
kubernetes.io/hostname: kind-worker # Schedule pods on one worker node
containers:
- name: http-echo
image: hashicorp/http-echo
args:
- >-
-text=Hello from Kubernetes! My IP is $(POD_IP)
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
name: loadbalancer
spec:
type: LoadBalancer
selector:
app: http-echo
ports:
- port: 80
targetPort: 5678
This manifest deploys 2 replicas of the http-echo container, which receives the Pod's IP address as an environment variable. When a replica is reached, the program running inside the container returns Hello from Kubernetes! My IP is $(POD_IP).
The Service is of type LoadBalancer, and selects pods with the app: http-echo label.
To deploy:
kubectl apply -f lb-deployment.yaml
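To verify that the rollout succeeded, you can use kubectl get all as suggested earlier, or watch the Deployment directly:
kubectl rollout status deployment/http-echo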
3. I deployed, now what?
You deployed, now what? Well, now you are going to write a Bash script that constantly curls the load balancer.
First, check the External IP of the load balancer:
kubectl get service loadbalancer
Then, write a shell script that sends some (at least 10) HTTP requests in a loop via curl; a sketch is shown below.
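A minimal sketch, assuming the Service is named loadbalancer as in the manifest above:
#!/usr/bin/env bash
# Grab the load balancer's external IP from the Service object.
LB_IP=$(kubectl get service loadbalancer -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Send 20 requests and print each response.
for i in $(seq 1 20); do
  curl -s "http://$LB_IP/"
  sleep 1
done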
Run your script: it should show HTTP responses from two different IP addresses. It might take some time to show output from both instances, as MetalLB is not a round-robin-style load balancer.
Now, compare the source IPs of the responses with the load balancer's public IP. Why do the responses come from a different network than the load balancer's?
Then, run the following command:
kubectl get pods -o wide
Part 6: Destroying the Cluster
Destroy your cluster.
Part 7: Cleaning up
Snapshot your VM for further use and terminate it.