From 7c5bbcdc0008f83a46f10e62e1d963e8e5403605 Mon Sep 17 00:00:00 2001
From: "marcoemi.poleggi" <marco-emilio.poleggi@hesge.ch>
Date: Mon, 14 Oct 2024 12:24:01 +0200
Subject: [PATCH] Doc revision

---
 README.md | 51 ++++++++++++++++++++++++++++++---------------------
 1 file changed, 30 insertions(+), 21 deletions(-)

diff --git a/README.md b/README.md
index 01b1509..5964574 100644
--- a/README.md
+++ b/README.md
@@ -1,23 +1,33 @@
 # lab-k8s
 
+Practice Kubernetes in a single-host environment via Kind.
+
 ## Objective
 
-This exercise will guide you through the process of deploying and managing a Kubernetes cluster using Kind (Kubernetes IN Docker) on a beefy SE (Standard Edition) instance. You will:
+This exercise will guide you through the process of provisioning and managing a Kubernetes cluster using Kind (Kubernetes IN Docker) on an OpenStack / Switch Engines (SE) instance. From an IaaS perspective, your _infrastructure_ is the host where Kind is installed.
+
+You will:
 
-1. Install Kind on your instance.
-2. Create a Kind cluster with the base configuration.
+1. [Install Kind](https://kind.sigs.k8s.io/docs/user/quick-start#installation) on your instance.
+2. Provision a [Kind cluster](https://kind.sigs.k8s.io/docs/user/quick-start#creating-a-cluster) with the base image.
 3. Interact with the cluster to understand its components.
-4. Modify the cluster configuration to add worker nodes.
-5. Redeploy and verify the new cluster setup.
-6. Deploy a service with a load balancer and test it.
-7. Clean up resources and snapshot the instance.
+4. Modify the cluster [configuration](https://kind.sigs.k8s.io/docs/user/configuration/) to add worker nodes.
+5. Reprovision the cluster and verify the new setup.
+6. Deploy a microservice with a load balancer and test it.
+7. Tear down the cluster and snapshot the instance.
 
+
+## Prerequisites
 
-## Prerequesites:
-- **Switch Linux Instance**: Ensure you have access to a beefy Switch instance with (at least a c1.large [4 cpus, 4GB RAM]) - Kind uses quite a few resources, after all we are simulating a full-fledged cluster. For the purposes of this exercise, choose Ubuntu 22.04 as the image.
+- Ensure you have access to a beefy Switch Engines Linux instance: a c1.large (4 vCPUs, 4 GB RAM) or larger should be OK.
+
+- Kind uses quite a few resources; after all, we are simulating a full-fledged cluster. For the purposes of this exercise, choose Ubuntu 22.04 as the image.
 
 ## Part 1: Installing Kind
 
+Kind uses Docker as a container runtime.
+
 ### 1. Installing Docker
+
 If Docker is not already installed on your instance, install it using the following commands:
 
 1. Set up Docker's apt repository:
@@ -58,10 +68,9 @@ docker run hello-world
 
 ### 2. Installing Kind
 
-To install kind, run:
+To install kind (assuming an AMD64 / x86_64 system), run:
 
 ```bash
-# For AMD64 / x86_64
 [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
 chmod +x ./kind
 sudo mv ./kind /usr/local/bin/kind
@@ -75,13 +84,13 @@ kind version
 
 Good, you are ready to create a cluster.
 
-## Part 2: Creating a Kind cluster with the Base Image
+## Part 2: Provisioning a Kind cluster with the base image
 
 There are two methods to create a cluster:
 
 1. Manually, by using the `kind create cluster` method, which will create a single node cluster
-2. Using a `kind-config.yaml` configuration file, and running `kind create cluster --config kind-config.yaml`. In this case, we can specifcy the number of nodes and the type of nodes.
+2. Using a `kind-config.yaml` configuration file, and running `kind create cluster --config kind-config.yaml`. In this case, we can specify the number and type of nodes.
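As an illustration of the second method, a minimal `kind-config.yaml` could look like the sketch below (the one-control-plane, two-worker topology is just an example; adjust the worker count to taste):

```yaml
# kind-config.yaml -- example topology: 1 control-plane node + 2 workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

You would then provision it with `kind create cluster --config kind-config.yaml`.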
-Using [this webpage](https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster) create a configuration to deploy a cluster with one `control-plane` node, and up to 10 worker nodes, you can choose the number of worker nodes, but less than 10.
+Refer to [this webpage](https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster) and create a configuration to deploy a cluster with one `control-plane` node and several worker nodes. You can choose the number of worker nodes, but please stay <= 10.
 
 ## Part 3: Interacting with the Cluster
 
@@ -107,7 +116,7 @@ kubectl get nodes
 
 ### 3. Check the Cluster
 
-A k8s cluster is much more than its nodes. Check all the moving parts of your cluster. P.S: it should be empty for now, but use this command later to verify if your deployment is successful:
+A K8s cluster is much more than its nodes. Check all the moving parts of your cluster. It should be empty for now, but use this command later to verify that your deployment is successful:
 
 ```bash
 kubectl get all
@@ -115,9 +124,9 @@ kubectl get all
 
 ## Part 4: Modifying the Cluster Configuration
 
-You already have a configuration file. Make some alteration to the file (remove worker nodes, add worker nodes | DO NOT REMOVE THE CONTROL PLANE NODE!!!), and try to launch the cluster again.
+You already have a configuration file. Make some alterations to the file: add or remove worker nodes, but, of course, **do not remove the control plane node!** Then launch the cluster again.
 
-To re-deploy, first delete the original cluster (hint: check kind's man-page), and then redeploy using the updated configuration file.
+To re-provision, first delete the original cluster (hint: check kind's man-page), and then recreate it using the updated configuration file.
 
 - Has the number of nodes changed?
 
@@ -125,7 +134,7 @@ To re-deploy, first delete the original cluster (hint: check kind's man-page), a
 
 ### 1.
Installing a Load Balancer
 
-While Kind allows us to test out a K8S cluster, it doesn't have all the bells. One of the things it lacks is a Load Balancer. Luckly, K8S is easily extendable, so, let's install one.
+While Kind allows us to test out a K8s cluster, it doesn't have all the bells and whistles. One of the things it lacks is a load balancer. Luckily, K8s is easily extensible, so let's install one: we'll use [MetalLB](https://metallb.universe.tf/).
 
 To install the MetalLB load balancer:
 
@@ -234,9 +243,9 @@ Run your code, what do you see? (PS: it will take some time to show both of the
 
 ## Part 6: Destroying the Cluster
 
-Destroy you cluster
+Destroy your cluster.
 
-## Part 7: Destroying the VM
+## Part 7: Cleaning up
 
-Now, destroy the VM.
+Snapshot your VM for future use and terminate it.
--
GitLab
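
A side note on the MetalLB step in the patch above: after installation, MetalLB still needs an address pool before `LoadBalancer` services can get an external IP. A minimal sketch, assuming Layer-2 mode (the pool range and the `kind-pool`/`kind-l2` names below are illustrative assumptions; pick free addresses from the subnet shown by `docker network inspect kind`):

```yaml
# metallb-pool.yaml -- example L2 address pool for a Kind cluster.
# The IP range is an assumption: choose unused addresses from the
# Docker "kind" network's subnet on your host.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - kind-pool
```

Applied with `kubectl apply -f metallb-pool.yaml`, this lets MetalLB assign external IPs from the pool to services of type `LoadBalancer`.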