diff --git a/README.md b/README.md
index 42523be86d1a295e57fd73e3e8a7147b071a8c0a..96cb8d8e084dfc93b89208b210c3eeec87f42bc5 100644
--- a/README.md
+++ b/README.md
@@ -9,12 +9,12 @@ This exercise will guide you through the process of provisioning and managing a
 Tasks:
 
 1. [Install KinD](https://kind.sigs.k8s.io/docs/user/quick-start#installation) on your instance.
-2. Provision a [KinD cluster](https://kind.sigs.k8s.io/docs/user/quick-start#creating-a-cluster) with the base image.
-3. Interact with the cluster to understand its components.
-4. Modify the cluster [configuration](https://kind.sigs.k8s.io/docs/user/configuration/) to add worker nodes.
-5. Reprovision the cluster and verify the new setup.
-6. Deploy a microservice with a load balancer and test it.
-7. Tear down the cluster and snapshot the instance.
+1. Provision a [KinD cluster](https://kind.sigs.k8s.io/docs/user/quick-start#creating-a-cluster) with the base image.
+1. Interact with the cluster to understand its components.
+1. Modify the cluster [configuration](https://kind.sigs.k8s.io/docs/user/configuration/) to add worker nodes.
+1. Reprovision the cluster and verify the new setup.
+1. Deploy a microservice with a load balancer and test it.
+1. Tear down the cluster and snapshot the instance.
 
-## Prerequesites
+## Prerequisites
 
@@ -252,13 +252,48 @@ Then, write a shell script that sends some (at least 10) HTTP requests in a loop
 
-Run your script: it should show HTTP reponses from two different IP addresses. It might take some time to show output from both instances, as metallb is not a round-robin-style load balancer.
+Run your script: it should show HTTP responses from two different IP addresses. It might take some time to see output from both instances, as MetalLB is not a round-robin-style load balancer.
 
-Now, compare the source IPs of the reponses with the loadbalancer's public IP. Why the responses come from a network different than the loadbalancer's?
+Now, compare the source IPs of the responses with the load balancer's public IP. :question: Why do the responses come from a different network than the load balancer's?
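+
+If you need a starting point for the request loop, here is a minimal sketch. It assumes the service's external IP has been copied into `LB_IP` (a placeholder) and that the service listens on port 80; adapt both to your setup.
+```bash
+#!/usr/bin/env bash
+# Placeholder external IP of the LoadBalancer service; replace it with the
+# EXTERNAL-IP reported by `kubectl get svc`.
+LB_IP=172.18.255.200
+
+# Send 10 requests in a loop and print each response.
+for i in $(seq 1 10); do
+  curl -s "http://${LB_IP}:80/"
+done
+```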
 
-Then, run the following command
+
+### 4. Further cluster manipulations
+
+Run the following command:
 ```bash
 kubectl get pods -o wide
 ```
 
+:question: How many worker nodes host the 2 application pods?
+
+Of course, just one, because our deployment's `nodeSelector` pins the pods to a specific node. That's quite inflexible.
+
+:question: How can we use more nodes? [Use node labels](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node):
+- add the label `application=http-echo` to your worker nodes (see the sketch after this list);
+- verify the labeling with the command:
+  ```bash
+  kubectl get nodes --show-labels
+  ```
+- modify your `lb-deployment.yaml` so that the `nodeSelector` references the above label instead of the node name;
+- redeploy with the command:
+  ```bash
+  kubectl replace -f lb-deployment.yaml
+  ```
+- verify that each pod is deployed to a different node.
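+
+A minimal sketch of the labeling and redeploy steps, assuming the default KinD node names `kind-worker` and `kind-worker2` (check the actual names with `kubectl get nodes`):
+```bash
+# Label both worker nodes so that the deployment's nodeSelector can match them.
+kubectl label node kind-worker  application=http-echo
+kubectl label node kind-worker2 application=http-echo
+
+# In lb-deployment.yaml, the nodeSelector should now select on the label
+# (e.g. `application: http-echo`) instead of a specific node name.
+kubectl replace -f lb-deployment.yaml
+kubectl get pods -o wide   # each pod should now run on a different worker
+```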
+
+:question: How many pods are actually deployed? You say 2? Wrong!
+
+Run the command:
+```bash
+kubectl get pods --all-namespaces -o wide
+```
+
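+One way to tally them, assuming the standard POSIX text tools are available on the instance:
+```bash
+# Count the pods in each namespace (first column of the listing).
+kubectl get pods --all-namespaces --no-headers | awk '{print $1}' | sort | uniq -c
+
+# Or simply count them all.
+kubectl get pods --all-namespaces --no-headers | wc -l
+```
+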
+:bulb: Note that the different K8s subsystems
+- are deployed over several pods, grouped in different *namespaces*;
+- use two different networks: some pods run on the node network `172.18.x.y` or similar (i.e., the **routable** Docker bridge network visible from the host), while others use the pod overlay network `10.244.x.y` or similar, which is not exposed to the host (see the commands below).
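+
+To see the two address ranges side by side, compare node addresses with pod addresses (the exact ranges depend on your Docker and KinD setup):
+```bash
+# Node addresses (the Docker bridge network, typically 172.18.x.y in KinD).
+kubectl get nodes -o wide
+
+# Pod addresses: host-networked system pods show a node IP, the others show a
+# pod-network IP (typically 10.244.x.y).
+kubectl get pods --all-namespaces -o wide
+
+# The Docker network KinD creates for the nodes (named `kind` by default).
+docker network inspect kind
+```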
+
+Run your curl script again to solicit the load balancer. Apart from the IP addresses in the responses, nothing should have changed.
+
+:question: Is the load balancer distributing the requests evenly (or almost evenly)? **Extend your curl script to verify.**
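+
+One possible extension, assuming (as above) that the load balancer's IP is stored in `LB_IP` and that each response is a single line identifying the answering pod:
+```bash
+#!/usr/bin/env bash
+LB_IP=172.18.255.200   # placeholder: use your service's EXTERNAL-IP
+
+# Send 100 requests and count how many times each distinct response occurs.
+for i in $(seq 1 100); do
+  printf '%s\n' "$(curl -s "http://${LB_IP}:80/")"
+done | sort | uniq -c
+```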
+
 ## Part 6: Destroying the Cluster
 
-Destroy you cluster.
+Destroy your cluster.