Commit 71aa4cea authored by francisc.mendonca

Created 3 folders, one for each exercise. Updated README to have the 3 exercises.

# Docker Tutorial
## Exercise 1: Deploying Docker Containers Using Docker Compose
### Objectives
- Understanding how to build Docker containers
- Understanding how to create Dockerfiles and Docker Compose files
- Learning how to deploy Docker containers using Docker Compose
### Before Starting
- Install [Docker](https://docs.docker.com/engine/install/)
- Optional: Create account on [Docker Hub](https://hub.docker.com)
### Building Docker Containers
In this exercise, we’ll use three containers:
1. **MQTT Broker**: Acts as a message server. We'll use the Mosquitto Broker for this exercise.
2. **Publisher Application**: Periodically publishes data to the broker.
3. **Subscriber Application**: Subscribes to the MQTT broker to receive the data sent by the publisher.
We'll start by creating "Dockerfiles" for the Publisher and Subscriber applications. A Dockerfile is like a recipe that tells Docker how to build a container.
#### Step 1: Creating the Dockerfiles
*Creating Publisher Dockerfile*
```dockerfile
FROM python:3.6
WORKDIR /Publisher
RUN pip install paho-mqtt
COPY publisher.py .
CMD ["python", "-u", "publisher.py"]
```
Explanation:
- `FROM python:3.6`: This tells Docker to use Python version 3.6 as the base for the container.
- `WORKDIR /Publisher`: Sets the working directory inside the container.
- `RUN pip install paho-mqtt`: Installs the paho-mqtt package, which is used to connect to the MQTT broker.
- `COPY publisher.py .`: Copies a file named publisher.py (your application code) into the container.
- `CMD ["python", "-u", "publisher.py"]`: This is the command that runs when the container starts. It runs the Python script.
*Creating Subscriber Dockerfile*
```dockerfile
FROM python:3.6
WORKDIR /Receiver
RUN pip install paho-mqtt
COPY receiver.py .
CMD ["python", "-u", "receiver.py"]
```
This Dockerfile is almost the same as the Publisher's, except it copies and runs a file named receiver.py.
#### Step 2: Building the Docker Images
Next, we'll use the Dockerfiles to build the container images. Open a terminal (or command prompt) and navigate to the folders you created:
##### Building the Publisher Image
Run this command to build the container:
```bash
docker build -t my-publisher:1.0 .
```
Explanation:
- `-t my-publisher:1.0` gives the image a name (`my-publisher`) and a tag (`1.0`).
- The `.` at the end tells Docker to use the current directory to find the Dockerfile.
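Optionally, you can check that the image now exists locally (a quick verification step; `my-publisher` is the name we tagged above):
```bash
# List local images named my-publisher
docker image ls my-publisher
```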
##### Building the Subscriber Image
Run this command to build the container:
```bash
docker build -t my-receiver:1.0 .
```
#### Optional: Pushing the Images to Docker Hub
If you want to upload these images to Docker Hub (so others can use them), follow these steps:
1. Login to Docker Hub:
```bash
docker login
```
2. Push the Publisher Image:
```bash
docker tag my-publisher:1.0 your-dockerhub-username/publisher:1.0
docker push your-dockerhub-username/publisher:1.0
```
3. Push the Subscriber Image:
```bash
docker tag my-receiver:1.0 your-dockerhub-username/receiver:1.0
docker push your-dockerhub-username/receiver:1.0
```
Replace `your-dockerhub-username` with your actual Docker Hub username.
### Deploying Containers Manually
Now, let’s start the containers one by one using simple Docker commands.
#### Step 1: Start the MQTT Broker
Run this command in your terminal:
```bash
docker run -d --name mqtt-broker eclipse-mosquitto:1.6
```
- `-d`: Runs the container in the background.
- `--name mqtt-broker`: Gives the container a name for easy reference.
- `eclipse-mosquitto:1.6`: Uses the MQTT broker image from Docker Hub.
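If you'd like to confirm that the broker started (an optional check), you can list the running containers:
```bash
# Show running containers whose name matches mqtt-broker
docker ps --filter name=mqtt-broker
```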
#### Step 2: Start the Publisher Application
Run this command:
```bash
docker run -d --name publisher my-publisher:1.0
```
#### Step 3: Start the Subscriber Application
Run this command:
```bash
docker run -d --name subscriber my-receiver:1.0
```
Your Publisher and Subscriber containers are now running. Note, however, that containers started this way sit on Docker's default bridge network, where they cannot resolve each other by container name; since the sample applications in this repository connect to the hostname `broker`, they will not reach the container named `mqtt-broker` above without some extra networking (see the sketch below).
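One way to make the manual deployment work is to put all three containers on a user-defined bridge network and give the broker the name the applications expect. This is a minimal sketch (the network name `mqtt-net` is just an example, and it assumes the applications connect to the hostname `broker`, as the sample `publisher.py` and `receiver.py` in this repository do):
```bash
# Create a user-defined bridge network (containers on it can resolve each other by name)
docker network create mqtt-net

# Start the broker under the name the applications expect
docker run -d --name broker --network mqtt-net eclipse-mosquitto:1.6

# Start the publisher and subscriber on the same network
docker run -d --name publisher --network mqtt-net my-publisher:1.0
docker run -d --name subscriber --network mqtt-net my-receiver:1.0

# Follow the subscriber's output to check that messages are flowing
docker logs -f subscriber
```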
### Deploying Containers Using Docker Compose
**Docker Compose** lets you define and start all the containers in one step.
#### Step 1: Create the Docker Compose File
Create a file named `docker-compose.yaml` in a new folder. Add the following content:
```yaml
version: '3.0'
services:
  broker:
    image: 'eclipse-mosquitto:1.6'
  publisher:
    image: 'my-publisher:1.0'
  receiver:
    image: 'my-receiver:1.0'
```
Explanation:
- **`version: '3.0'`**: Specifies the Compose file format version used by this file.
- **`services:`**: This section defines the different containers (services) that will be created.
  - **`broker:`**: This defines a service named "broker".
    - **`image: 'eclipse-mosquitto:1.6'`**: Uses the official 'eclipse-mosquitto' image, which is an MQTT broker. Version 1.6 of the image is specified.
  - **`publisher:`**: Defines a service named "publisher".
    - **`image: 'my-publisher:1.0'`**: Uses an image named 'my-publisher' with version '1.0'. This is the custom image you created for the publisher application.
  - **`receiver:`**: Defines a service named "receiver".
    - **`image: 'my-receiver:1.0'`**: Uses an image named 'my-receiver' with version '1.0'. This is the custom image you created for the receiver application.
In short, this YAML file sets up three services: a broker (MQTT server), a publisher, and a receiver, each running in its own container. Because Compose places all of these services on a shared default network, the publisher and receiver can reach the broker simply by using its service name, `broker`, as the hostname. When you run `docker-compose up`, Docker Compose will automatically create and start these three containers.
#### Step 2: Deploy the Containers with Docker Compose
In the terminal, navigate to the folder containing `docker-compose.yaml` and run:
```bash
docker-compose up -d
```
- `up`: Creates and starts the containers.
- `-d`: Runs the containers in the background.
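To check that everything started correctly and that messages are flowing, you can inspect the Compose project (an optional check; the service names come from the `docker-compose.yaml` above):
```bash
# List the containers started by this Compose project
docker-compose ps

# Follow the receiver's output to see the messages it receives
docker-compose logs -f receiver
```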
#### Step 3: Stopping and Removing the Containers
To stop and remove the containers, use:
```bash
docker-compose down
```
### Summary
- You learned how to define container images using **Dockerfiles** and built them using Docker commands.
- You manually deployed the containers using `docker run` commands.
- You used **Docker Compose** to define and deploy multiple containers with a simple YAML file.
Docker Compose is a handy tool that makes managing multiple containers easy, especially when they need to work together, like in this example. Now you have the basics to start exploring more complex containerized applications!
## Exercises 2 and 3: Docker Container Basics - Understanding Bridge and Overlay Networks
### Objectives
- Learn the differences between **Bridge** and **Overlay** networks in Docker.
- Understand how containers in different networks can communicate.
### Introduction to Docker Networks
Docker networks allow containers to communicate with each other. There are different types of networks, but we'll focus on two:
1. **Bridge Network:** The default network type, which connects containers running on the same machine so they can communicate with each other.
2. **Overlay Network:** Designed for Docker Swarm, which allows containers on different physical or virtual machines to communicate as if they are on the same network.
#### When to Use Each Network
- **Bridge Network:** Use this when all your containers are on the same machine.
- **Overlay Network:** Use this when your containers are spread across multiple machines (in a Docker Swarm).
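Both network types can also be created directly from the command line. Here is a minimal illustration (the network names below are placeholders, and the overlay command must be run on a Docker Swarm manager node):
```bash
# A bridge network for containers on a single host
docker network create --driver bridge my-bridge-net

# An overlay network for a Docker Swarm (run this on a manager node);
# --attachable also lets standalone containers join it
docker network create --driver overlay --attachable my-overlay-net
```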
Now, let's look at some examples to make this clearer.
### Exercise 2: Using a Bridge Network
In this example, we'll set up a simple application with three containers: one RabbitMQ service and two other services (app1 and app2). We'll connect these services using two separate bridge networks.
#### Bridge Network Docker Compose File
Here's the `docker-compose.yaml` file for setting up our containers:
```yaml
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "5672:5672"   # RabbitMQ default messaging port
      - "15672:15672" # RabbitMQ management console
    networks:
      - app-network-1
      - app-network-2

  app1:
    image: franciscomendonca/auto-messaging:1.0.1
    restart: on-failure
    environment:
      - START_WITH_MESSAGE=true # Start by sending a message
      - RABBITMQ_HOST=rabbitmq
    depends_on:
      - rabbitmq
    networks:
      - app-network-1

  app2:
    image: franciscomendonca/auto-messaging:1.0.1
    container_name: app2
    restart: on-failure
    environment:
      - START_WITH_MESSAGE=false # Wait for a message before responding
      - RABBITMQ_HOST=rabbitmq
    depends_on:
      - rabbitmq
    networks:
      - app-network-2

networks:
  app-network-1:
    driver: bridge
    attachable: true
  app-network-2:
    driver: bridge
    attachable: true
```
#### Running the Bridge Network Example
1. Save the above YAML code in a file named `docker-compose.yaml`.
2. Open your terminal and navigate to the directory containing `docker-compose.yaml`.
3. Run `docker-compose up -d` to create and start the containers.
This setup allows `app1` and `app2` to communicate with `rabbitmq` because they are attached to networks (`app-network-1` and `app-network-2`). However, since they are on different bridge networks, they can't directly communicate with each other unless we set up additional network rules.
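Purely as an illustration (not part of this exercise), one way to allow them to talk is to attach one container to the other network after the stack is up. The names below are placeholders; Compose prefixes container and network names with the project name, so check `docker ps` and `docker network ls` for the exact names on your machine:
```bash
# Attach app1's container to the second bridge network so it can reach app2
docker network connect <project>_app-network-2 <app1-container-name>
```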
### Exercise 3: Using an Overlay Network in Docker Swarm
#### What Is an Overlay Network?
An overlay network allows containers to communicate across different machines (nodes) in a Docker Swarm. You can think of it as a virtual network that spans multiple Docker hosts.
#### Prerequisites for Overlay Networks
You need a Docker Swarm cluster with at least three virtual machines (VMs). Let's call them **VM1**, **VM2**, and **VM3**. You can use cloud services (like AWS or Azure) or local VMs.
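Setting up the swarm itself is not covered in detail here, but a minimal sketch looks like the following (the IP address and token are placeholders; `docker swarm init` prints the exact `docker swarm join` command to run on the other VMs):
```bash
# On VM1 (the future manager node): initialize the swarm
docker swarm init --advertise-addr <VM1-IP>

# On VM2 and VM3: join the swarm using the join command (with its token)
# that 'docker swarm init' printed on VM1
docker swarm join --token <worker-token> <VM1-IP>:2377

# Back on VM1: verify that all three nodes are listed
docker node ls
```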
#### Overlay Network Docker Compose File
Here's the `docker-compose.yaml` for the overlay network setup:
#### How to Set Up and Run the Overlay Network Example
1. **Create Virtual Machines (VMs):** Set up three VMs (VM1, VM2, VM3).
2. **Install Docker on Each VM:**
**`docker-compose.yaml`** (Exercise 1)
```yaml
version: '3.0'
services:
  broker:
    image: 'eclipse-mosquitto:1.6'
  publisher:
    image: 'franciscomendonca/publisher:1.0.4'
  receiver:
    image: 'franciscomendonca/receiver:1.0.3'
```
**`Dockerfile`** (Publisher)
```dockerfile
FROM python:3.6
WORKDIR /Publisher
RUN pip install paho-mqtt
COPY publisher.py .
CMD ["python", "-u", "publisher.py"]
```
**`Dockerfile`** (Receiver)
```dockerfile
FROM python:3.6
WORKDIR /Receiver
RUN pip install paho-mqtt
COPY receiver.py .
CMD ["python", "-u", "receiver.py"]
```
**`publisher.py`**
```python
import paho.mqtt.client as mqtt
import os
import time

def on_connect(client, userdata, flags, rc):
    if rc == 0:
        print("Connection Successful! Returned code =", rc)
    else:
        print("Unable to Connect. Returned code =", rc)

def sender(client):
    while True:
        client.publish('comms', 'MQTT Secret Message')
        print('Published on topic comms')
        time.sleep(10)

if __name__ == "__main__":
    client = mqtt.Client()
    client.on_connect = on_connect
    client.connect('broker', 1883, 60)
    # Run the network loop in a background thread so keep-alives are sent
    # and the on_connect callback fires while sender() blocks.
    client.loop_start()
    sender(client)
```
**`receiver.py`**
```python
import paho.mqtt.client as mqtt
import os
import time

def on_connect(client, userdata, flags, rc):
    if rc == 0:
        print("Connection Successful! Returned code =", rc)
    else:
        print("Unable to Connect. Returned code =", rc)

def on_message(client, userdata, msg):
    print(msg.topic + ': ' + str(msg.payload.decode('utf-8')))

if __name__ == "__main__":
    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect('broker', 1883, 60)
    client.subscribe('comms')
    client.loop_forever()
```
**`docker-compose.yaml`** (Exercise 2 - bridge network)
```yaml
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "5672:5672"   # RabbitMQ default messaging port
      - "15672:15672" # RabbitMQ management console
    networks:
      - app-network-1
      - app-network-2

  app1:
    image: franciscomendonca/auto-messaging:1.0.1
    restart: on-failure
    environment:
      - START_WITH_MESSAGE=true # Start by sending a message
      - RABBITMQ_HOST=rabbitmq
    depends_on:
      - rabbitmq
    networks:
      - app-network-1

  app2:
    image: franciscomendonca/auto-messaging:1.0.1
    container_name: app2
    restart: on-failure
    environment:
      - START_WITH_MESSAGE=false # Wait for a message before responding
      - RABBITMQ_HOST=rabbitmq
    depends_on:
      - rabbitmq
    networks:
      - app-network-2

networks:
  app-network-1:
    driver: bridge
    attachable: true
  app-network-2:
    driver: bridge
    attachable: true
```
**`Dockerfile`** (auto-messaging app)
```dockerfile
# Use the official Python image from the Docker Hub
FROM python:3.9-slim

# Set environment variables to reduce Python's buffer for more efficient logging
ENV PYTHONUNBUFFERED=1

# Create a directory for the app
WORKDIR /app

# Copy the current directory (where your script is) to /app in the container
COPY . /app

# Install requirements - NOTE: --no-cache-dir prevents pip from storing downloaded files in local cache (reduces image size).
RUN pip install --no-cache-dir -r requirements.txt

# Set the default command to run the Python script
CMD ["python", "main.py"]
```
**`main.py`** (auto-messaging app)
```python
import pika
import time
import random
import os
import logging
import uuid

# Set up logging
logging.basicConfig(level=logging.INFO)

# Set up connection parameters
rabbitmq_host = os.getenv('RABBITMQ_HOST')  # RabbitMQ container name

# Check if the RabbitMQ host is set
if not rabbitmq_host:
    logging.error("RABBITMQ_HOST environment variable is not set")
    exit(1)

logging.info(f"RabbitMQ host: {rabbitmq_host}")

connection_params = pika.ConnectionParameters(host=rabbitmq_host)

# Function to send messages
def send_message(channel, message):
    # Send a message back to the queue
    channel.basic_publish(exchange='',
                          routing_key='test_queue',
                          body=message)
    print(f" [x] Sent '{message}'")

# Function to handle received messages and send a response
def on_message_received(ch, method, properties, body):
    received_message = body.decode()
    print(f" [x] Received '{received_message}'")

    # Create a response message
    response_message = f"Response to '{received_message}' from {random.randint(1, 1000)}"

    # Simulate some processing time
    time.sleep(2)

    # Send the response message
    send_message(ch, response_message)

# Function to start the auto-messaging system
def start_auto_messaging(start_with_message):
    # Set up a connection and channel
    logging.info(f"Establishing connection: {connection_params}")
    connection = pika.BlockingConnection(connection_params)
    channel = connection.channel()
    logging.info("Connection established")

    # Declare a queue
    channel.queue_declare(queue='test_queue')

    # Check if the system should start by sending a message
    if start_with_message:
        logging.info("Starting with an initial message")
        initial_message = f"Initial message from {random.randint(1, 1000)}"
        send_message(channel, initial_message)

    # Start consuming and handle each message with the on_message_received function
    channel.basic_consume(queue='test_queue',
                          on_message_callback=on_message_received,
                          auto_ack=True)

    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()

if __name__ == '__main__':
    # Read environment variable to check if the system should start by sending a message
    start_with_message = os.getenv('START_WITH_MESSAGE', 'false').lower() == 'true'
    logging.info(f"Does it start messaging: {start_with_message}")

    # Start the auto-messaging system
    start_auto_messaging(start_with_message)
```
**`requirements.txt`**
```
pika
```