diff --git a/README.md b/README.md
index 5964574a19990f08717b3414defd936ab5b9d80c..8e41600556cafe0434c912e3d5e500a8976da837 100644
--- a/README.md
+++ b/README.md
@@ -1,36 +1,36 @@
 # lab-k8s
-Practice Kubernetes in a single host environment via Kind.
+Practice Kubernetes in a single-host environment via KinD, which uses Docker as a container runtime.
-## Objective
+## Objectives
-This exercise will guide you through the process of provisioning and managing a Kubernetes cluster using Kind (Kubernetes IN Docker) on an OpenStack / Switch Engines (SE) instance. In a IaaS perspective, your _infrastructure_ is the host where Kind is installed.
+This exercise will guide you through the process of provisioning and managing a Kubernetes cluster using KinD (Kubernetes in Docker) on an OpenStack / Switch Engines (SE) instance. From an IaaS perspective, your _infrastructure_ is the host where KinD is installed.
-You will:
+Tasks:
-1. [Install Kind](https://kind.sigs.k8s.io/docs/user/quick-start#installation) on your instance.
-2. Provision a [Kind cluster](https://kind.sigs.k8s.io/docs/user/quick-start#creating-a-cluster) with the base image.
+1. [Install KinD](https://kind.sigs.k8s.io/docs/user/quick-start#installation) on your instance.
+2. Provision a [KinD cluster](https://kind.sigs.k8s.io/docs/user/quick-start#creating-a-cluster) with the base image.
 3. Interact with the cluster to understand its components.
 4. Modify the cluster [configuration](https://kind.sigs.k8s.io/docs/user/configuration/) to add worker nodes.
-5. Reprovsion the clustare and verify the new setup.
+5. Reprovision the cluster and verify the new setup.
 6. Deploy a microservice with a load balancer and test it.
 7. Tear down the cluster and snapshot the instance.
-## Prerequesites
+## Prerequisites
-- Ensure you have access to a beefy Switch Engines Linux instance. A c1.large instance with at least a [4 cpus, 4GB RAM]) should be OK.
+- Ensure you have access to a beefy Switch Engines Linux instance. A `c1.large` instance with at least 4 CPUs and 4 GB RAM should be OK.
-- Kind uses quite a few resources, after all we are simulating a full-fledged cluster. For the purposes of this exercise, choose Ubuntu 22.04 as the image.
+- KinD uses quite a few resources; after all, we are simulating a full-fledged cluster. For the purposes of this exercise, choose Ubuntu 22.04 as the image.
-## Part 1: Installing Kind
+## Part 1: Installing KinD on your VM
-Kind uses Docker as a container runtime.
+Log in to your VM via SSH.
 ### 1. Installing Docker
 If Docker is not already installed on your instance, install it using the following commands:
-1. Set up Docker's apt repostiory:
+1. Set up Docker's apt repository:
 ```bash
 # Add Docker's official GPG key:
 sudo apt-get update
@@ -68,7 +68,7 @@ docker run hello-world
 ### 2. Installing Kind
-To install kind (assuming an AMD64 / x86_64 system), run:
+Assuming an AMD64 / x86_64 system, run:
 ```bash
 [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
@@ -86,16 +86,22 @@ Good, you are ready to create a cluster.
 ## Part 2: Provision a Kind cluster with the base image
-There are two methods to create a cluster:
-1. Manually, by using the `kind create cluster` method, which will create a single node cluster
-2. Using a `kind-config.yaml` configuration file, and running `kind create cluster --config kind-config.yaml`. In this case, we can specify the number and type of nodes.
+The command `kind create cluster` provisions a [single-node cluster](https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster) by default. To specify more nodes and the roles assigned to them, you should:
-Refer to [this webpage](https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster) and create a configuration to deploy a cluster with one `control-plane` node and several worker nodes. You can choose the number of worker nodes, but please stay <= 10.
+1. 
Write a `kind-config.yaml` configuration file following the [advanced method](https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster), which specifies a `control-plane` node and a couple of `worker` nodes
+2. Run `kind create cluster --config kind-config.yaml`
+
+To confirm that the cluster was correctly provisioned, run:
+```bash
+kind get clusters
+kind get nodes
+```
 ## Part 3: Interacting with the Cluster
 ### 1. Installing Kubectl
-To interact with this cluster, let's install `kubectl`, the main command line tool to interact with kubernetes clusters.
+
+To interact with your cluster, let's first install the official K8s CLI tool `kubectl`:
 ```bash
 # Download the binary
@@ -108,13 +114,19 @@ kubectl version --client
 ### 2. Check the Nodes
-- How many nodes are deployed? Are they all working?
+Run:
 ```bash
-kubectl get nodes
+kubectl get nodes -o wide
 ```
-### 3. Check the Cluster
+- How many nodes are deployed?
+- Are they all working? Try to ping them.
+- What's the cluster's overlay IP network?
+- Compare with the output of the command `ip addr`: what kind of host-level network is the overlay?
+- Are there any pods running?
+
+### 3. Check the Cluster
 A K8s cluster is much more than its nodes. Check all the moving parts of your cluster. It should be empty for now, but use this command later to verify if your deployment is successful:
@@ -169,7 +181,7 @@ Good, now we can create deployment and service file, which in this case, we'll h
 ### 2. Deployment and Service File
-Finally! Here is the little app we are going to deploy. Create a YAML file in your VM with the following code:
+Finally! Here is the little app we are going to deploy. Create a YAML deployment file `lb-deployment.yaml` in your VM with the following content:
 ```yaml
-# deployment-service.yaml
+# lb-deployment.yaml
@@ -206,7 +218,7 @@ spec:
 apiVersion: v1
 kind: Service
 metadata:
-  name: LoadBalancer
+  name: http-echo-service
 spec:
   type: LoadBalancer
   selector:
@@ -223,23 +235,29 @@ The service is of type `LoadBalancer`, and looks for pods with the `app: http-ec
 To deploy:
 ```bash
-kubectl apply -f <FILENAME>.yaml
+kubectl apply -f lb-deployment.yaml
 ```
 ### 3. I deployed, now what?
-You deployed, now what? Well, now you are going to do a bash program to constantly `curl` the load balancer.
+You deployed, now what? Well, now you are going to write a bash script to constantly `curl` the load balancer.
-First, check the External IP of the load balancer:
+First, check the **External IP** of the load balancer:
 ```bash
 kubectl get service http-echo-service
 ```
-Then, write your program. Make sure to print out the response from `curl`.
-Remember to change the permissions of your code before running (`chmod +x ....`)
+Then, write a shell script that sends some (at least 10) HTTP requests in a loop via `curl`.
-Run your code, what do you see? (PS: it will take some time to show both of the instances, as this Load Balancer is not really a Round-Robin style Load Balancer)
+Run your script: it should show HTTP responses from two different IP addresses. It might take some time to show output from both instances, as MetalLB is not a round-robin-style load balancer.
+
+Now, compare the source IPs of the responses with the load balancer's external IP. Why do the responses come from a different network than the load balancer's?
+
+Then, run the following command:
+```bash
+kubectl get pods -o wide
+```
 ## Part 6: Destroying the Cluster
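
As a reference for Part 2, the `kind-config.yaml` the diff refers to can be sketched as follows. This is a minimal example, not the repository's actual file: the choice of one `control-plane` node and two `worker` nodes is an assumption, and any small worker count works the same way.

```yaml
# kind-config.yaml (assumed sketch): one control-plane node and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

With this file, `kind create cluster --config kind-config.yaml` should bring up three nodes, which `kind get nodes` will then list.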