diff --git a/doc/rapport.gpp.md b/doc/rapport.gpp.md
index 75526fb5928fadb451fbb10be217a27268e90cc1..e776b9a7e40851227d4d7c3bc7bc834816d9b01d 100644
--- a/doc/rapport.gpp.md
+++ b/doc/rapport.gpp.md
@@ -1756,6 +1756,22 @@ dossier de l'image. Ce fichier sera directement copié dans
 l'!!acronym{ESP}, sans qu'une tentative de détermination automatique du
 point d'entrée à partir du fichier `nvram.dat` soit effectuée.
 
+## Replacing _Vagrant_ with _Docker_
+
+The _Vagrant_ setup put in place at the start of the project turned out
+to be somewhat cumbersome to use and poorly suited to deployment on a
+production server. The single _Vagrant_ machine has therefore been
+replaced by three Docker containers: one for the !!acronym{DHCP} server
+(see the source in section !!ref{dhcp_dockerfile}), one for the
+!!acronym{TFTP} server (see the source in section
+!!ref{tftp_dockerfile}) and one for the !!acronym{NFS} server (see the
+source in section !!ref{nfs_dockerfile}).
+
+Starting these three containers simultaneously and configuring them is
+handled by the _docker-compose_ tool (see the source in section
+!!ref{source_docker_compose}).
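+
+In practice, the three services can be built and started with the
+following commands; this is a sketch for orientation, the
+`docker-compose` flags shown being the ones used by the `start-server`
+target of the project's _Makefile_:
+
+```bash
+# Build GRUB and the deployment OS, then build and start the DHCP,
+# TFTP and NFS containers together.
+make start-server
+
+# Equivalent direct invocation of docker-compose, as in the Makefile:
+docker-compose up --build --remove-orphans --abort-on-container-exit
+```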
+
+
 ## Post-deployment image customization
 
 Un système a été conçu pour que l'utilisateur puisse choisir des
@@ -1835,78 +1851,6 @@ La personnalisation **2** est passive: elle remplace juste un fichier
 mais ne nécessite pas de mécanisme spécial sur la machine de déploiement
 pour l'exécuter.
 
-!!ifdef{PRINT_TODO}
-# Final architecture of the project
-
-## Components
-
-**TODO: diagram of the project's components and a description**
-
-## Deployment process
-
-**TODO: diagram of the deployment process and a complete description of
-it. Detailed explanation of the structure of the deployment scripts,
-of their possibilities and of their limitations.**
-
-## Network architecture
-
-**TODO: description of the final network architecture used to develop
-and test the system, with a diagram. Explain how this test
-architecture can be transposed to a production architecture.**
-
-## Build process
-
-**TODO: explain how the different parts of the project are built.
-Explain the different parts of the makefile and the different docker
-images used to build the different components.**
-
-## Server deployment
-
-**TODO: explain how the server can run using a `docker-compose` that
-brings together the TFTP, NFS and DHCP servers. Explain how these
-different parts are used in a development environment, the
-limitations, and how to adapt this configuration to a production
-environment.**
-
-## Image structure
-
-**TODO: explain exactly how the data of the images (raw and
-clonezilla) is structured.**
-
-# Using the deployed system
-
-## Deploying an image to a workstation
-
-This section contains a walkthrough describing the steps to deploy an
-image to a client workstation when a server is already installed and
-configured and the machine is configured to boot from the network.
-
-1. Start the machine and wait until it
-
-**TODO: explain step by step how to use the interface to deploy an
-image to a client and the user's various interactions, with
-screenshots, etc.**
-
-## Creating a clonezilla image
-
-**TODO: explain step by step how to create an image with clonezilla
-and how to add it to the deployment server.**
-
-## Creating a dd image
-
-**TODO: explain step by step how to create an image with clonezilla
-and how to add it to the deployment server.**
-
-## Preparing an image so that it supports the customization system
-
-**TODO: explain how to prepare a system so that it supports the
-customization system with ansible, and how to create the
-customization scripts, with a simple typical example.**
-
-!!endif{PRINT_TODO}
-
 # Conclusion
 
 ## Summary of the work accomplished
@@ -2058,6 +2002,12 @@ programmation des systèmes (_sIT_242_), systèmes d'exploitation
 avancés (_sIT_384_), programmation avancée des systèmes (_sIT_632_) et
 virtualisation des !!acronym{SI} (_sIT_632_).
 
+The project did not go exactly as planned: the road was strewn with
+pitfalls and problems that caused most stages to take longer than
+expected. I am nevertheless very satisfied to have delivered a working
+and efficient system at the end of this project.
+
 !!ifdef{PANDOC_PDF}
 
 !!chapterwithoutnumber{References}
@@ -2074,7 +2024,7 @@ Cette annexe contient les listings des codes sources les plus importants
 du projet.
 
 !!sourcefile{Makefile}{makefile}{_GNU Make_ configuration of the project}
-!!sourcefile{docker-compose.yml}{yaml}{_docker-compose_ configuration of the deployment server}
+!!sourcefile{docker-compose.yml}{yaml}{_docker-compose_ configuration of the deployment server !!label{source_docker_compose}}
 !!sourcefile{deployer/Dockerfile}{dockerfile}{_Docker_ configuration for building the deployment OS !!label{deployer_dockerfile}}
 !!sourcefile{deployer/bootiful-deploy-log.service}{ini}{_Systemd_ unit configuration for the deployment scripts}
 !!sourcefile{deployer/initramfs.conf}{bash}{configuration for creating an _initramfs_ that boots over !!acronym{NFS}}
@@ -2085,12 +2035,12 @@ du projet.
 !!sourcefile{deployer/bootiful-deploy}{bash}{image deployment script}
 !!sourcefile{deployer/bootiful-save-image}{bash}{raw image creation utility script}
 !!sourcefile{deployer/bootiful-reset-cache}{bash}{cache reset utility script}
-!!sourcefile{dhcp/Dockerfile}{dockerfile}{_Docker_ configuration of the !!acronym{DHCP} server}
+!!sourcefile{dhcp/Dockerfile}{dockerfile}{_Docker_ configuration of the !!acronym{DHCP} server !!label{dhcp_dockerfile}}
 !!sourcefile{dhcp/dhcpd.conf}{bash}{configuration of the !!acronym{DHCP} server}
 !!sourcefile{grub/Dockerfile}{dockerfile}{_Docker_ configuration for compiling !!acronym{GRUB}}
-!!sourcefile{nfs/Dockerfile}{dockerfile}{_Docker_ configuration of the !!acronym{NFS} server}
+!!sourcefile{nfs/Dockerfile}{dockerfile}{_Docker_ configuration of the !!acronym{NFS} server !!label{nfs_dockerfile}}
 !!sourcefile{nfs/exports}{bash}{share configuration of the !!acronym{NFS} server}
-!!sourcefile{tftp/Dockerfile}{dockerfile}{_Docker_ configuration of the !!acronym{TFTP} server}
+!!sourcefile{tftp/Dockerfile}{dockerfile}{_Docker_ configuration of the !!acronym{TFTP} server !!label{tftp_dockerfile}}
 !!sourcefile{tftp/tftpd-hpa}{bash}{configuration of the !!acronym{TFTP} server}
 !!sourcefile{tftp/tftpboot/boot/grub/grub.cfg}{bash}{!!acronym{GRUB} configuration served over !!acronym{TFTP}}
 !!sourcefile{postdeploy/bootiful-postdeploy}{bash}{post-deployment script that runs the _Ansible_ playbooks found in a directory !!label{source_ansible_run}}
diff --git a/doc/rapport.md b/doc/rapport.md
index 97ba90d445365405f07c908c83387e9500dc64fa..2b7986895079976aa6fe9afeb99c704ce26e1ae2 100644
--- a/doc/rapport.md
+++ b/doc/rapport.md
@@ -1864,6 +1864,22 @@ dossier de l'image. Ce fichier sera directement copié dans
 l'<abbr title="EFI System Partition: partition système EFI ">ESP</abbr>, sans qu'une tentative de détermination automatique du
 point d'entrée à partir du fichier `nvram.dat` soit effectuée.
 
+## Replacing _Vagrant_ with _Docker_
+
+The _Vagrant_ setup put in place at the start of the project turned out
+to be somewhat cumbersome to use and poorly suited to deployment on a
+production server. The single _Vagrant_ machine has therefore been
+replaced by three Docker containers: one for the
+<abbr title="Dynamic Host Configuration Protocol: protocole de configuration dynamique des hôtes ">DHCP</abbr> server (see the source in section ), one
+for the <abbr title="Trivial File Transfer Protocol: protocole simplifié de transfert de fichiers ">TFTP</abbr> server (see the source in section
+) and one for the <abbr title="Network File System: système de fichiers en réseau ">NFS</abbr> server (see the
+source in section ).
+
+Starting these three containers simultaneously and configuring them is
+handled by the _docker-compose_ tool (see the source in section
+).
+
+
 ## Post-deployment image customization
 
 Un système a été conçu pour que l'utilisateur puisse choisir des
@@ -1937,5 +1953,2251 @@ Les personnalisations **1** et **3** nécessitent que l'image soit
 dans `/etc/bootiful/postdeploy-playbooks` au démarrage. Des exemples
 d'un tel script et de l'unité _systemd_ servant à le lancer au démarrage
 sont consultables dans les sections  et
-.'w''''
+.
+
+Customization **2** is passive: it merely replaces a file and does not
+require any special mechanism on the deployment machine to run it.
+
+# Conclusion
+
+## Summary of the work accomplished
+
+The initial work was recovered, analyzed, documented and brought back
+into working order by fixing a few problems in the configuration files
+and the deployment scripts. The _Vagrant_ tool was used to automate the
+creation of a virtual server running the system's various services.
+
+The system was then modified to make it compatible with modern
+computers that use the <abbr title="Unified Extensible Firmware Interface: interface micrologicielle extensible unifiée ">UEFI</abbr> standard instead of a
+<abbr title="Basic Input Output System: système de base d’entrée sortie ">BIOS</abbr>.
+
+Measurements were made to determine whether the choice of
+<abbr title="Network File System: système de fichiers en réseau ">NFS</abbr>, the network protocol used to transfer the images,
+was appropriate. It was compared with the <abbr title="Secure CoPy: protocole de copie sécurisée sur le réseau ">SCP</abbr>,
+<abbr title="HyperText Transfer Protocol: protocole de transfert hypertexte ">HTTP</abbr>, <abbr title="File Transfer Protocol: protocole de transfert de fichier ">FTP</abbr>, <abbr title="Server Message Block ">SMB</abbr> and
+<abbr title="InterPlanetary File System: système de fichier inter-planétaire ">IPFS</abbr> protocols by measuring the transfer time of an
+image. These measurements showed that deployment times with these
+different protocols were similar, except for <abbr title="InterPlanetary File System: système de fichier inter-planétaire ">IPFS</abbr>, which
+took much longer. The <abbr title="Network File System: système de fichiers en réseau ">NFS</abbr> protocol was therefore
+kept, since the measurements proved its performance to be on par with
+the alternatives.
+
+An alternative to compressed raw images was sought as an image format.
+A solution producing smaller images that can be deployed faster was
+found: using images produced with the _Clonezilla_ tool. This tool also
+makes image creation easier through its _live_ version. Measurements
+proved that this image deployment system offers a real advantage in
+disk usage and deployment time over compressed raw images.
+
+The operating system used to perform the deployment was completely
+replaced. Initially built with _Buildroot_, with a root filesystem
+loaded entirely into memory, it was replaced by a _Debian_ system whose
+root filesystem is mounted read-only from an <abbr title="Network File System: système de fichiers en réseau ">NFS</abbr> share. This
+system allows any _Debian_ package to be installed easily, notably
+_Clonezilla_, which was complicated to install on a system built with
+_Buildroot_. The creation of this operating system was automated and
+encapsulated in a _Docker_ container to make modifying and rebuilding
+it very simple, regardless of the operating system used.
+
+The deployment scripts of the initial system were rewritten. Advanced
+error handling was implemented. A system measuring and displaying the
+execution time of each part of the deployment was developed. Log file
+generation was improved so that the logs provide more useful
+information and are structured in a way that makes managing and
+consulting them easier. Image deployment with _Clonezilla_ was
+implemented, while keeping the ability to use compressed raw images,
+both for flexibility and to allow comparing the two systems. Support
+for images of <abbr title="Unified Extensible Firmware Interface: interface micrologicielle extensible unifiée ">UEFI</abbr>-compatible operating systems was also
+implemented, notably by automatically determining the
+<abbr title="Extensible Firmware Interface: interface micrologicielle extensible unifiée ">EFI</abbr> executable that must be launched to boot the deployed
+system.
+
+Finally, a system for customizing a system after its deployment was
+designed, allowing configurations to be defined that the user can
+choose to copy, or not, onto the deployed system. An operating system
+image can be designed to execute the files copied by these
+customizations into a specific directory, enabling complex
+configurations to be run at boot with a tool of one's choice, for
+example _Ansible_. This system offers a lot of flexibility and allows
+advanced configurations to be defined while keeping the deployment
+system independent of whatever is used to run them, since the only
+mechanism present in the deployment system is copying files to a given
+partition.
+
+## Possible improvements
+
+Although the system at the end of this project is fully functional,
+some points should or could be studied, improved or implemented before
+the system is ready for use in a real-world setting.
+
+### Improving image download performance on a real network with many simultaneous clients
+
+The network protocol used to download the images was tested with a
+single client, on a minimal network consisting only of the client and
+the server. The image download time is very likely to be much longer
+than measured in this work when many clients download images at the
+same time, on a more complex network such as the school's.
+
+Studying and mitigating this issue was deliberately set aside in this
+work because it was difficult to simulate the real conditions of a
+system deployed in a school: all the work was done outside the school,
+on a small home network with a limited number of computers available
+for testing.
+
+Several approaches could be used to improve this performance, such as
+_multicast_ copying of the images, redundancy of the servers providing
+the images, or the use of a _peer-to-peer_ protocol such as
+_BitTorrent_ or _IPFS_ to spread the image transfer across all
+connected clients instead of using only the central server as the
+source of the file.
+
+### Improving the image customization system
+
+The image customization system was implemented late in the work.
+Although it is functional, it has not been tested extensively. Some
+time should be spent creating complex post-deployment configurations
+that are executed at boot by the deployed system, and validating that
+they work.
+
+For example, it was planned to test running _Ansible_ _playbooks_ at
+system boot, copied according to the user's customization choices.
+These _playbooks_ could be used to perform many actions, such as
+installing selected package groups, generating a unique hostname for
+the machine, or joining an _Active Directory_ domain to let students
+log in with the same username and access their _home_ directory from
+an <abbr title="Network File System: système de fichiers en réseau ">NFS</abbr> share, as on the standard workstations present at the
+school.
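+
+Purely as an illustration of this idea (no such playbook exists in the
+project; the hostname prefix and package names below are hypothetical),
+such a _playbook_ could look like this:
+
+```yaml
+# Hypothetical post-deployment playbook: gives the machine a unique
+# hostname derived from its MAC address and installs a package group.
+- hosts: localhost
+  connection: local
+  become: yes
+  tasks:
+    - name: Generate a unique hostname from the MAC address
+      hostname:
+        name: "lab-{{ ansible_default_ipv4.macaddress | replace(':', '') }}"
+
+    - name: Install a selected package group
+      apt:
+        name: [firefox-esr, libreoffice]
+        state: present
+```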
+
+### Improving security
+
+On the current system, a malicious user can destroy or modify the
+images stored on the remote server, because they are fetched from an
+<abbr title="Network File System: système de fichiers en réseau ">NFS</abbr> share with read-write permissions. The share is
+read-write because the logs are written to it as well. A simple way to
+solve this problem would be to split this share in two: the first,
+read-only, would contain the images, and the second would contain only
+the logs.
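+
+As a sketch only (the paths and client subnet below are assumptions,
+not the project's actual configuration), the split could look like this
+in the server's `exports` file:
+
+```bash
+# Images: exported read-only, so clients cannot modify or delete them.
+/nfsshared/images  192.168.1.0/24(ro,no_subtree_check)
+# Logs: a separate read-write export.
+/nfsshared/log     192.168.1.0/24(rw,no_subtree_check)
+```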
+
+A solution should then be found to prevent a machine from modifying or
+deleting another machine's logs, which is complicated to achieve with
+an <abbr title="Network File System: système de fichiers en réseau ">NFS</abbr> share. The logs could perhaps be transmitted to the
+server over a separate communication channel, delegating the creation
+and naming of the log file to the server, so that clients would no
+longer be able to access the log files directly; the server alone
+would be responsible for creating and managing them.
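+
+Purely as an illustration of this idea (nothing like this exists in the
+project; the port and paths are assumptions), a socket-activated
+_systemd_ template unit on the server could receive each client's log
+stream and choose the file name itself:
+
+```ini
+# bootiful-log.socket (hypothetical): spawn one service per connection.
+[Socket]
+ListenStream=5140
+Accept=yes
+
+[Install]
+WantedBy=sockets.target
+```
+
+```ini
+# bootiful-log@.service (hypothetical): the client only streams bytes;
+# the file name is derived server-side from the connection instance.
+[Service]
+StandardInput=socket
+ExecStart=/bin/sh -c 'exec cat > "/nfsshared/log/%i.log"'
+```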
+
+## Looking back on the course of the work
+
+I found working on this subject very interesting. It allowed me to put
+into practice many skills acquired in the courses followed over the
+previous four years, in particular the courses on systems programming
+(_sIT_242_), operating systems (_sIT_244_), computer networks and
+protocols (_sIT_362_), advanced networking (_sIT_384_), advanced
+systems programming (_sIT_632_) and virtualization of
+<abbr title="Système d’Information ">SI</abbr> (_sIT_632_).
+
+The project did not go exactly as planned: the road was strewn with
+pitfalls and problems that caused most stages to take longer than
+expected. I am nevertheless very satisfied to have delivered a working
+and efficient system at the end of this project.
+
+
+
+# Notable source code
+
+This appendix contains the listings of the project's most important
+source code.
+
+
+
+## `Makefile`: _GNU Make_ configuration of the project
+
+```makefile
+SHELL := /bin/bash
+MAKEFLAGS += "-j 4"
+
+DOCKER_BUILDKIT_BUILD = DOCKER_BUILDKIT=1 docker build --progress=plain
+
+GRUB_SRC := $(shell find grub/bootiful-grub/ -type f -regex ".*\.\(c\|h\|sh\|py\|cfg\|conf\)")
+GRUB_I386_PC_BIN = tftp/tftpboot/boot/grub/i386-pc/core.0
+GRUB_I386_EFI_BIN = tftp/tftpboot/boot/grub/i386-efi/core.efi
+GRUB_X86_64_EFI_BIN = tftp/tftpboot/boot/grub/x86_64-efi/core.efi
+DEPLOYER_SRC := $(wildcard deployer/*)
+TFTP_DEPLOYER_DIR = tftp/tftpboot/boot/deployer
+TFTP_DEPLOYER_VMLINUZ := $(TFTP_DEPLOYER_DIR)/vmlinuz
+TFTP_DEPLOYER_INITRD := $(TFTP_DEPLOYER_DIR)/initrd.img
+NFS_DEPLOYER_ROOT := nfs/nfsroot.tar.gz
+LATEST_LOG := $(shell ls -1 nfs/nfsshared/log/*.log | tail -n 1)
+
+.PHONY: all doc grub deployer start-server clean print-latest-log help
+
+# Builds everything
+all: doc grub deployer
+
+# Builds PDF and markdown documents
+doc:
+	$(MAKE) -C doc
+
+grub/bootiful-grub/bootstrap partclone/bootiful-partclone/Makefile.am: .gitmodules
+	git submodule init && git submodule update
+
+# Bootstraps GRUB dependencies and configuration script
+grub/bootiful-grub/configure: grub/bootiful-grub/bootstrap
+	(pushd grub/bootiful-grub && ./bootstrap; popd)
+
+# Builds GRUB for i386-pc
+$(GRUB_I386_PC_BIN): grub/Dockerfile grub/bootiful-grub/configure $(GRUB_SRC)
+	$(DOCKER_BUILDKIT_BUILD) ./grub \
+	    --output ./tftp/tftpboot \
+	    --build-arg PLATFORM=pc \
+	    --build-arg TARGET=i386
+
+# Builds GRUB for i386-efi
+$(GRUB_I386_EFI_BIN): grub/Dockerfile grub/bootiful-grub/configure $(GRUB_SRC)
+	$(DOCKER_BUILDKIT_BUILD) ./grub \
+	    --output ./tftp/tftpboot \
+	    --build-arg PLATFORM=efi \
+	    --build-arg TARGET=i386
+
+# Builds GRUB for x86_64-efi
+$(GRUB_X86_64_EFI_BIN): grub/Dockerfile grub/bootiful-grub/configure $(GRUB_SRC)
+	$(DOCKER_BUILDKIT_BUILD) ./grub \
+	    --output ./tftp/tftpboot \
+	    --build-arg PLATFORM=efi \
+	    --build-arg TARGET=x86_64
+
+# Builds GRUB for all platforms
+grub: $(GRUB_I386_PC_BIN) $(GRUB_I386_EFI_BIN) $(GRUB_X86_64_EFI_BIN)
+
+# Builds the deployer OS
+deployer: $(TFTP_DEPLOYER_VMLINUZ) $(TFTP_DEPLOYER_INITRD) $(NFS_DEPLOYER_ROOT)
+
+$(TFTP_DEPLOYER_VMLINUZ) $(TFTP_DEPLOYER_INITRD) &: $(DEPLOYER_SRC)
+	$(DOCKER_BUILDKIT_BUILD) ./deployer --target tftp-export-stage --output $(TFTP_DEPLOYER_DIR) && \
+	touch -c $(TFTP_DEPLOYER_VMLINUZ) $(TFTP_DEPLOYER_INITRD)
+
+$(NFS_DEPLOYER_ROOT): $(DEPLOYER_SRC)
+	$(DOCKER_BUILDKIT_BUILD) ./deployer --target nfs-export-stage --output nfs/ && \
+	touch -c $(NFS_DEPLOYER_ROOT)
+
+# Starts bootiful services in docker containers
+start-server: grub deployer
+	docker-compose up --build --remove-orphans --abort-on-container-exit
+
+# Removes all generated files
+clean:
+	rm -rf deployer/rootfs
+	$(MAKE) -C doc clean
+
+# Prints the latest deployment log file
+print-latest-log:
+	cat $(LATEST_LOG)
+
+# Show this help.
+help:
+	printf "Usage:  make <target>\n\nTargets:\n"
+	awk '/^#/{c=substr($$0,3);next}c&&/^[[:alpha:]][[:alnum:]_-]+:/{print "  " substr($$1,1,index($$1,":")),c}1{c=0}' $(MAKEFILE_LIST) | column -s: -t
+
+```
+
+
+
+## `docker-compose.yml`: _docker-compose_ configuration of the deployment server
+
+```yaml
+version: "3.8"
+
+services:
+  bootiful-dhcp:
+    build: ./dhcp
+    network_mode: host
+  bootiful-tftp:
+    build: ./tftp
+    network_mode: host
+    volumes:
+      - type: bind
+        source: ./tftp/tftpboot
+        target: /tftpboot
+        read_only: yes
+  bootiful-nfs:
+    build: ./nfs
+    network_mode: host
+    privileged: yes
+    volumes:
+      - type: tmpfs
+        target: /nfsroot
+      - type: bind
+        source: /run/media/araxor/bigdata/nfsshared
+        target: /nfsshared
+      - type: bind
+        source: /lib/modules
+        target: /lib/modules
+        read_only: yes
+    environment:
+      NFS_LOG_LEVEL: DEBUG
+
+```
+
+
+
+## `deployer/Dockerfile`: _Docker_ configuration for building the deployment OS
+
+```dockerfile
+FROM debian:bullseye AS build-stage
+RUN apt-get update && apt-get install -y multistrap
+
+WORKDIR /multistrap
+
+ADD ./multistrap.config ./
+RUN multistrap --arch amd64 --file ./multistrap.config --dir ./rootfs --tidy-up
+
+ADD ./hostname ./rootfs/etc/hostname
+ADD ./hosts ./rootfs/etc/hosts
+ADD ./fstab ./rootfs/etc/fstab
+ADD ./initramfs.conf ./rootfs/etc/initramfs-tools/initramfs.conf
+ADD ./bootiful-deploy-log.service ./rootfs/etc/systemd/system/bootiful-deploy-log.service
+
+ADD ./configure.sh ./rootfs/
+RUN chroot /multistrap/rootfs ./configure.sh
+
+RUN mkdir ./boot ./rootfs/bootiful ./rootfs/var/lib/clonezilla ./rootfs/home/partimag
+RUN ln -s /proc/mounts rootfs/etc/mtab
+RUN cp ./rootfs/vmlinuz ./rootfs/initrd.img ./boot/ && \
+    rm -rf ./rootfs/configure.sh ./rootfs/vmlinuz* ./rootfs/initrd.img* ./rootfs/boot
+
+ADD ./bootiful-deploy-init ./rootfs/usr/bin/
+ADD ./bootiful-common ./rootfs/usr/bin/
+ADD ./bootiful-deploy ./rootfs/usr/bin/
+ADD ./bootiful-save-image ./rootfs/usr/bin/
+ADD ./bootiful-reset-cache ./rootfs/usr/bin/
+RUN tar -czf nfsroot.tar.gz rootfs --hard-dereference && rm -rf ./rootfs
+
+FROM scratch AS nfs-export-stage
+COPY --from=build-stage /multistrap/nfsroot.tar.gz /
+
+FROM scratch AS tftp-export-stage
+COPY --from=build-stage /multistrap/boot /
+
+
+```
+
+
+
+## `deployer/bootiful-deploy-log.service`: _Systemd_ unit configuration for the deployment scripts
+
+```ini
+[Unit]
+Description=Bootiful interactive remote image deployment
+Conflicts=getty@tty1.service
+Before=getty.target
+
+[Service]
+Type=oneshot
+RemainAfterExit=yes
+ExecStartPre=/bin/sleep 2
+ExecStart=/usr/bin/bootiful-deploy-init
+StandardInput=tty
+StandardOutput=tty
+StandardError=tty
+
+[Install]
+WantedBy=multi-user.target
+```
+
+
+
+## `deployer/initramfs.conf`: configuration for creating an _initramfs_ that boots over <abbr title="Network File System: système de fichiers en réseau ">NFS</abbr>
+
+```bash
+#
+# initramfs.conf
+# Configuration file for mkinitramfs(8). See initramfs.conf(5).
+#
+# Note that configuration options from this file can be overridden
+# by config files in the /etc/initramfs-tools/conf.d directory.
+
+#
+# MODULES: [ most | netboot | dep | list ]
+#
+# most - Add most filesystem and all harddrive drivers.
+#
+# dep - Try and guess which modules to load.
+#
+# netboot - Add the base modules, network modules, but skip block devices.
+#
+# list - Only include modules from the 'additional modules' list
+#
+
+MODULES=netboot
+
+#
+# BUSYBOX: [ y | n | auto ]
+#
+# Use busybox shell and utilities.  If set to n, klibc utilities will be used.
+# If set to auto (or unset), busybox will be used if installed and klibc will
+# be used otherwise.
+#
+
+BUSYBOX=auto
+
+#
+# KEYMAP: [ y | n ]
+#
+# Load a keymap during the initramfs stage.
+#
+
+KEYMAP=n
+
+#
+# COMPRESS: [ gzip | bzip2 | lz4 | lzma | lzop | xz ]
+#
+
+COMPRESS=xz
+
+#
+# NFS Section of the config.
+#
+
+#
+# DEVICE: ...
+#
+# Specify a specific network interface, like eth0
+# Overridden by optional ip= or BOOTIF= bootarg
+#
+
+DEVICE=
+
+#
+# NFSROOT: [ auto | HOST:MOUNT ]
+#
+
+NFSROOT=auto
+
+#
+# RUNSIZE: ...
+#
+# The size of the /run tmpfs mount point, like 256M or 10%
+# Overridden by optional initramfs.runsize= bootarg
+#
+
+RUNSIZE=10%
+
+```
+
+
+
+## `deployer/multistrap.config`: _multistrap_ configuration for creating the root filesystem
+
+```ini
+[General]
+unpack=true
+bootstrap=DRBL Debian
+aptsources=Debian
+addimportant=true
+
+[Debian]
+packages=nfs-common linux-image-amd64 parted systemd udev strace zstd dialog lolcat gdisk gawk pigz pv clonezilla partclone partimage cifs-utils
+source=http://http.debian.net/debian
+keyring=debian-archive-keyring
+suite=bullseye
+components=main contrib non-free
+
+```
+
+
+
+## `deployer/configure.sh`: root filesystem configuration script to run in a `chroot`
+
+```bash
+#!/bin/bash
+
+set -e
+
+export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true
+export LC_ALL=C LANGUAGE=C LANG=C
+
+dpkg --configure -a
+
+apt-get -y autoremove --purge
+apt-get clean
+
+update-initramfs -u
+
+systemctl enable bootiful-deploy-log.service
+
+echo "root:bootiful" | chpasswd
+
+```
+
+
+
+## `deployer/bootiful-deploy-init`: deployment initialization script
+
+```bash
+#!/bin/bash
+
+umask -S 0000 &>/dev/null
+clear
+/usr/games/lolcat -a -d 6 -s 20 -F 0.5 <<'EOF'
+ .o8                               .    o8o   .o88o.             oooo
+"888                             .o8    `"'   888 `"             `888
+ 888oooo.   .ooooo.   .ooooo.  .o888oo oooo  o888oo  oooo  oooo   888
+ d88' `88b d88' `88b d88' `88b   888   `888   888    `888  `888   888
+ 888   888 888   888 888   888   888    888   888     888   888   888
+ 888   888 888   888 888   888   888 .  888   888     888   888   888
+ `Y8bod8P' `Y8bod8P' `Y8bod8P'   "888" o888o o888o    `V88V"V8P' o888o
+EOF
+declare logo_pressed_key
+read -t 0.001 -n 1 -s -r logo_pressed_key
+readonly logo_pressed_key
+
+select_next_action() {
+    local next_action_pressed_key
+    while true; do
+        echo
+        echo "Press 'd' to restart deployment"
+        echo "Press 's' to start an interactive command-line shell"
+        echo "Press 'r' to reboot"
+        echo "Press 'p' to power off"
+
+        read -n 1 -s -r next_action_pressed_key
+        case "$next_action_pressed_key" in
+            [dD])
+                echo "Restarting deployment..."
+                break
+                ;;
+            [sS])
+                echo "Starting an interactive command-line shell..."
+                /bin/bash -i
+                ;;
+            [rR])
+                echo "Rebooting..."
+                reboot
+                ;;
+            [pP])
+                echo "Powering off..."
+                poweroff
+                ;;
+            *)
+                echo "Error: No action defined for key '$next_action_pressed_key'"
+                ;;
+        esac
+    done
+}
+
+if [[ "$logo_pressed_key" =~ ^[sS]$ ]]; then
+    echo "Skipping deployment...."
+    /bin/bash -i
+    select_next_action
+fi
+
+while ! bootiful-deploy; do
+    echo "Error in deployment."
+    select_next_action
+done
+
+echo "Deployment successful. Rebooting..."
+reboot
+
+
+```
+
+
+
+## `deployer/bootiful-common`: script defining the functions shared by the `bootiful-*` scripts
+
+```bash
+#!/bin/bash
+
+if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
+    echo >&2 "Error: script '$0' must be sourced, not executed."
+    return 1
+fi
+
+if [[ -n "$BOOTIFUL_COMMON_SOURCED" ]]; then
+    echo >&2 "Warning: script '$0' sourced more than once."
+    return 0
+fi
+
+readonly BOOTIFUL_COMMON_SOURCED=true
+
+echo_err() {
+    echo >&2 "$*"
+}
+
+# Writes an error message and a stack trace to stderr, then exits the current
+# shell with the error status code 1.
+#
+# Warning: if called in a sub-shell, the error messages are written to stderr
+#          but only the sub-shell is exited. The parent shell should always
+#          check the sub-shells exit status codes and call `fatal_error` if a
+#          non-0 exit status is returned.
+fatal_error() {
+    local -r message="${1:-unknown reason}"
+
+    echo_err "Fatal error: $message"
+
+    echo_err "Stack trace:"
+    local frame=0
+    while caller $frame >&2; do
+        ((frame++))
+    done
+
+    exit 1
+}
+
+# If the INT signal (ctrl+c) is received, a fatal error is thrown
+fatal_error_on_sigint() {
+    fatal_error "SIGINT received"
+}
+trap fatal_error_on_sigint INT
+
+declare -a exit_callbacks=()
+execute_exit_callbacks() {
+    for exit_callback in "${exit_callbacks[@]}"; do
+        "$exit_callback"
+    done
+}
+trap 'execute_exit_callbacks' EXIT
+
+add_exit_callback() {
+    local -r exit_callback_function="$1"
+    exit_callbacks+=("$exit_callback_function")
+}
+
+declare -a step_names=()
+declare -a step_timestamps=()
+declare -a step_durations=()
+declare -a step_types=()
+
+readonly STEP_TYPE_BATCH="batch"
+readonly STEP_TYPE_INTERACTIVE="interactive"
+
+timestamp_now() {
+    date +%s
+}
+
+finish_step() {
+    local -r step_name="$1"
+    local -r step_start_timestamp="$2"
+    local -r step_finish_timestamp="$3"
+
+    step_durations+=($((step_finish_timestamp - step_start_timestamp)))
+    echo_err "Finished $step_name (duration: ${step_durations[-1]}s)"
+}
+
+print_step_durations() {
+
+    if [[ "${#step_timestamps[@]}" -gt 0 ]]; then
+        finish_step "${step_names[-1]}" \
+            "${step_timestamps[-1]}" \
+            "$(timestamp_now)"
+    fi
+
+    local total_batch_duration=0
+    local total_interactive_duration=0
+    local total_duration=0
+    local step_name
+    local step_type
+    local step_duration
+    local step_number
+
+    echo_err
+    echo_err "Steps duration summary:"
+
+    for i in "${!step_durations[@]}"; do
+        step_name="${step_names[$i]}"
+        step_duration="${step_durations[$i]}"
+        step_type="${step_types[$i]}"
+        step_number=$((i + 1))
+
+        echo_err "$step_number - $step_name took ${step_duration} seconds ($step_type)"
+
+        case "$step_type" in
+        "$STEP_TYPE_BATCH")
+            ((total_batch_duration += step_duration))
+            ;;
+        "$STEP_TYPE_INTERACTIVE")
+            ((total_interactive_duration += step_duration))
+            ;;
+        esac
+
+        ((total_duration += step_duration))
+    done
+
+    echo_err
+    echo_err "Total batch duration: ${total_batch_duration}s"
+    echo_err "Total interactive duration: ${total_interactive_duration}s"
+    echo_err "Total duration: ${total_duration}s"
+}
+
+start_step() {
+    step_timestamps+=("$(timestamp_now)")
+    step_names+=("$1")
+    step_types+=("$2")
+
+    local -r steps_count="${#step_timestamps[@]}"
+
+    if [[ $steps_count -eq 1 ]]; then
+        add_exit_callback "print_step_durations"
+    elif [[ "${#step_timestamps[@]}" -gt 1 ]]; then
+        finish_step "${step_names[-2]}" \
+            "${step_timestamps[-2]}" \
+            "${step_timestamps[-1]}"
+    fi
+
+    echo_err "Started ${step_names[-1]}"
+}
+
+start_step_batch() {
+    local -r step_name="$1"
+    start_step "$step_name" "$STEP_TYPE_BATCH"
+}
+
+start_step_interactive() {
+    local -r step_name="$1"
+    start_step "$step_name" "$STEP_TYPE_INTERACTIVE"
+}
+
+warning() {
+    local -r message="$1"
+
+    echo_err
+    echo_err "Warning: $message"
+
+    local pressed_key
+    while true; do
+        echo_err
+        echo_err "Continue? (y/n) "
+
+        read -n 1 -s -r pressed_key
+        case "$pressed_key" in
+        [yY])
+            echo_err "Continuing..."
+            return 0
+            ;;
+        [nN])
+            echo_err "Aborting..."
+            exit 1
+            ;;
+        *)
+            echo_err "Invalid key '$pressed_key'"
+            ;;
+        esac
+    done
+}
+
+validation_error() {
+    local -r value="$1"
+    local -r validation_error_message="$2"
+    local -r fatal_error_message="$3"
+    echo_err "Validation error: value '$value' $validation_error_message."
+    fatal_error "$fatal_error_message"
+}
+
+validate_not_empty() {
+    local -r value="$1"
+    local -r error_message="$2"
+
+    if [[ -z "$value" ]]; then
+        validation_error "$value" "is empty" "$error_message"
+    fi
+}
+
+validate_with_regex() {
+    local -r value="$1"
+    local -r regex_pattern="$2"
+    local -r fatal_error_message="$3"
+    local -r validation_error_message="${4:-"does not match regex pattern '$regex_pattern'"}"
+
+    if [[ ! "$value" =~ $regex_pattern ]]; then
+        validation_error "$value" "$validation_error_message" "$fatal_error_message"
+    fi
+}
+
+validate_uint() {
+    local -r value="$1"
+    local -r error_message="$2"
+    local -r regex_pattern='^[0-9]+$'
+
+    validate_with_regex "$value" "$regex_pattern" "$error_message" "is not a positive integer"
+}
+
+validate_nonzero_uint() {
+    local -r value="$1"
+    local -r error_message="$2"
+
+    validate_uint "$value" "$error_message"
+
+    if [[ "$value" -le 0 ]]; then
+        validation_error "$value" "is not greater than zero" "$error_message"
+    fi
+}
+
+validate_file_exists() {
+    local -r value="$1"
+    local -r error_message="${2:-File does not exist}"
+
+    if [[ ! -f "$value" ]]; then
+        validation_error "$value" "is not the path of an existing regular file" "$error_message"
+    fi
+}
+
+# Validates that the given path exists, without testing whether it is a
+# regular file (unlike `validate_file_exists`).
+validate_exists() {
+    local -r value="$1"
+    local -r error_message="$2"
+
+    if [[ ! -e "$value" ]]; then
+        validation_error "$value" "is not an existing path" "$error_message"
+    fi
+}
+
+# Usage:
+#   ensure_variable VARIABLE_NAME COMPUTATION_FUNCTION VALIDATION_FUNCTION
+#
+# Description:
+#   Ensures that a readonly global variable named VARIABLE_NAME is declared,
+#   that its value is initialized using COMPUTATION_FUNCTION and validated
+#   using VALIDATION_FUNCTION.
+#
+#   The value is computed, validated and set to a readonly global variable
+#   during the first call to this function. Nothing more is done on
+#   subsequent calls if the variable is already set.
+#
+#   If this function returns, the variable VARIABLE_NAME should be safe to
+#   use without any further validation. The variable can be accessed from
+#   its name, or using an expression like `${!VARIABLE_NAME}`.
+#
+# Arguments:
+#   VARIABLE_NAME
+#       The name of the variable to ensure.
+#
+#   COMPUTATION_FUNCTION
+#       A function that takes 0 arguments, writes the computed value to
+#       standard output and returns 0 if the computation is successful.
+#
+#   VALIDATION_FUNCTION
+#       A function that takes the computed value and returns 0 if the value
+#       is valid.
+#
+# Exit status code:
+#   0  when no error is encountered. If any error is encountered during
+#      declaration, computation, validation or assignment, the current shell
+#      will be exited by a call to `fatal_error` so the function will never
+#      return.
+ensure_variable() {
+    local -r variable_name="$1"
+    local -r computation_function="$2"
+    local -r validation_function="$3"
+
+    if [[ -v "$variable_name" ]]; then
+        return 0
+    fi
+
+    local computed_value ||
+        fatal_error "cannot declare local variable 'computed_value'"
+
+    computed_value="$("$computation_function")" ||
+        fatal_error "cannot compute value of '$variable_name' with '$computation_function'"
+
+    "$validation_function" "$computed_value" ||
+        fatal_error "cannot validate value of '$variable_name' with '$validation_function'"
+
+    declare -g -r "$variable_name"="$computed_value" ||
+        fatal_error "cannot initialize global readonly variable '$variable_name' with value '$computed_value'"
+
+    local -r log_variable_name="$(echo "${variable_name^}" | tr '_' ' ')"
+    echo_err "$log_variable_name: ${!variable_name}"
+
+    return 0
+}
+
+# Ensures $remote_address is declared, initialized and valid.
+ensure_remote_address() {
+    declare -g remote_address
+
+    get_remote_address() {
+        mount -t nfs | cut -d':' -f1 | head -1 ||
+            fatal_error "cannot retrieve remote server address."
+    }
+
+    validate_remote_address() {
+        validate_not_empty "$1" "no valid remote server address has been found."
+    }
+
+    ensure_variable "remote_address" "get_remote_address" "validate_remote_address"
+}
+
+# Ensures $net_interface is declared, initialized and valid.
+ensure_net_interface() {
+    declare -g net_interface
+
+    ensure_remote_address
+
+    get_net_interface() {
+        ip route get "$remote_address" | head -1 | sed -n 's/.* dev \([^ ]*\).*/\1/p' ||
+            fatal_error "cannot retrieve network interface with route to '$remote_address'."
+    }
+
+    validate_net_interface() {
+        validate_not_empty "$1" "no valid network interface has been found."
+    }
+
+    ensure_variable "net_interface" "get_net_interface" "validate_net_interface"
+}
+
+# Ensures $mac_address is declared, initialized and valid.
+ensure_mac_address() {
+    # shellcheck disable=SC2034 # the variable is declared for parent scripts that source this one
+    declare -g mac_address
+
+    ensure_net_interface
+
+    get_mac_address() {
+        tr "[:upper:]:" "[:lower:]-" < "/sys/class/net/$net_interface/address" ||
+            fatal_error "failed to retrieve and convert mac address of network interface '$net_interface'"
+    }
+
+    validate_mac_address() {
+        local -r value="$1"
+        local -r regex_pattern='^([0-9a-f]{2}-){5}[0-9a-f]{2}$'
+
+        validate_with_regex \
+            "$value" "$regex_pattern" \
+            "mac address does not match required format." \
+            "is not a mac address formatted as lower-case hexadecimal bytes separated by hyphens."
+    }
+
+    ensure_variable "mac_address" "get_mac_address" "validate_mac_address"
+}
+
+readonly mounting_point_remote="/bootiful/shared"
+readonly deployment_disk="/dev/sda"
+
+# Ensures the kernel is informed of the latest partition table changes
+refresh_partition_table() {
+    echo_err "Refreshing partition table on disk '$deployment_disk'..."
+    partprobe "$deployment_disk"
+}
+
+# Checks if something is currently mounted on the given mount point
+is_mounted() {
+    local -r mount_point="$1"
+
+    refresh_partition_table
+    findmnt --mountpoint "$mount_point"
+}
+
+# Ensures that the given directory exists, creating it if necessary. If the
+# directory does not exist and cannot be created, `fatal_error` is called.
+ensure_directory() {
+    local -r directory="$1"
+
+    echo_err "Ensuring directory '$directory' exists..."
+    if [[ -d "$directory" ]]; then
+        echo_err "Directory '$directory' already exists."
+        return 0
+    fi
+
+    echo_err "Directory '$directory' does not exist. Attempting to create it..."
+    mkdir -p "$directory" ||
+        fatal_error "Cannot create directory $directory."
+    echo_err "Directory '$directory' created."
+}
+
+# Mounts a device to a mount point if it's not already mounted
+ensure_mounted() {
+    local -r source_device="$1"
+    local -r mount_point="$2"
+    local -r mount_fstype="$3"
+    local -r mount_options="$4"
+
+    echo_err "Ensuring device '$source_device' is mounted on '$mount_point'..."
+
+    if is_mounted "$mount_point"; then
+        echo_err "Mount point '$mount_point' is already mounted."
+        return 0
+    fi
+
+    echo_err "Nothing is mounted on mount point '$mount_point'."
+
+    ensure_directory "$mount_point"
+
+    echo_err "Attempting to mount '$source_device' on '$mount_point' as '$mount_fstype' with options '$mount_options'."
+
+    if [[ -z "$mount_fstype" && -z "$mount_options" ]]; then
+        mount "$source_device" "$mount_point" ||
+            fatal_error "Failed to mount device '$source_device' on '$mount_point'."
+    else
+        mount -t "$mount_fstype" -o "$mount_options" "$source_device" "$mount_point" ||
+            fatal_error "Failed to mount device '$source_device' on '$mount_point'."
+    fi
+
+    echo_err "Mount successful."
+}
+
+# Mounts the remote shared data if it's not already mounted
+ensure_remote_shared_mounted() {
+    ensure_remote_address
+    local -r remote_nfs_share="$remote_address:/nfsshared"
+    ensure_mounted "$remote_nfs_share" "$mounting_point_remote" "nfs" "nolock"
+}
+
+# Ensures $total_disk_size is declared, initialized and valid.
+ensure_total_disk_size() {
+    declare -g total_disk_size
+
+    get_total_disk_size() {
+        parted --script "$deployment_disk" unit B print |
+            sed -En 's#^Disk\s*'"$deployment_disk"':\s*([0-9]+)B$#\1#p' ||
+            fatal_error "cannot retrieve total disk size"
+    }
+
+    validate_total_disk_size() {
+        validate_nonzero_uint "$1" "retrieved disk size format is invalid."
+    }
+
+    ensure_variable "total_disk_size" "get_total_disk_size" "validate_total_disk_size"
+}
+
+# Ensures $sector_sizes is declared, initialized and valid
+ensure_sector_sizes() {
+    declare -g sector_sizes
+
+    get_sector_sizes() {
+        parted --script "$deployment_disk" print |
+            sed -En 's#^Sector size \(logical/physical\):\s*([0-9]+)B/([0-9]+)B$#\1\t\2#p' ||
+            fatal_error "cannot retrieve sector size"
+    }
+
+    validate_sector_sizes() {
+        validate_with_regex \
+            "$1" \
+            '^[0-9]+\s+[0-9]+$' \
+            'retrieved sector sizes are invalid' \
+            'does not contain two unsigned integers separated by spaces'
+    }
+
+    ensure_variable "sector_sizes" "get_sector_sizes" "validate_sector_sizes"
+}
+
+# Ensures $logical_sector_size is declared, initialized and valid
+ensure_logical_sector_size() {
+    # shellcheck disable=SC2034 # the variable is declared for parent scripts that source this one
+    declare -g logical_sector_size
+
+    ensure_sector_sizes
+
+    extract_logical_sector_size() {
+        echo "$sector_sizes" | cut -f 1 ||
+            fatal_error "cannot extract logical sector size from sector sizes"
+    }
+
+    validate_logical_sector_size() {
+        validate_nonzero_uint "$1" "retrieved logical sector size is invalid"
+    }
+
+    ensure_variable "logical_sector_size" "extract_logical_sector_size" "validate_logical_sector_size"
+}
+
+# Ensures $physical_sector_size is declared, initialized and valid
+ensure_physical_sector_size() {
+    declare -g physical_sector_size
+
+    ensure_sector_sizes
+
+    extract_physical_sector_size() {
+        echo "$sector_sizes" | cut -f 2 ||
+            fatal_error "cannot extract physical sector size from sector sizes"
+    }
+
+    validate_physical_sector_size() {
+        validate_nonzero_uint "$1" "retrieved physical sector size is invalid"
+    }
+
+    ensure_variable "physical_sector_size" "extract_physical_sector_size" "validate_physical_sector_size"
+}
+
+# Ensures $image_cache_partition_size is declared, initialized and valid
+ensure_image_cache_partition_size() {
+    declare -g image_cache_partition_size
+
+    ensure_total_disk_size
+    ensure_physical_sector_size
+
+    calculate_image_cache_partition_size() {
+        echo "$(((20 * total_disk_size / 100) / physical_sector_size * physical_sector_size))" ||
+            fatal_error "cannot calculate image partition size"
+    }
+
+    validate_image_cache_partition_size() {
+        validate_nonzero_uint "$1" "calculated image cache partition size is invalid"
+    }
+
+    ensure_variable "image_cache_partition_size" \
+        "calculate_image_cache_partition_size" \
+        "validate_image_cache_partition_size"
+}
+
+# Ensures $image_cache_partition_start is declared, initialized and valid
+ensure_image_cache_partition_start() {
+    declare -g image_cache_partition_start
+
+    ensure_total_disk_size
+    ensure_physical_sector_size
+    ensure_image_cache_partition_size
+
+    calculate_image_cache_partition_start() {
+        echo "$(((total_disk_size - image_cache_partition_size) / physical_sector_size * physical_sector_size - 4096))" ||
+            fatal_error "cannot calculate image cache partition start"
+    }
+
+    validate_image_cache_partition_start() {
+        validate_nonzero_uint "$1" "calculated image cache partition start is invalid"
+    }
+
+    ensure_variable "image_cache_partition_start" \
+        "calculate_image_cache_partition_start" \
+        "validate_image_cache_partition_start"
+}
+
+# Ensures $image_cache_partition_end is declared, initialized and valid
+ensure_image_cache_partition_end() {
+    # shellcheck disable=SC2034 # the variable is declared for parent scripts that source this one
+    declare -g image_cache_partition_end
+
+    ensure_image_cache_partition_size
+    ensure_image_cache_partition_start
+
+    calculate_image_cache_partition_end() {
+        echo "$((image_cache_partition_start + image_cache_partition_size))" ||
+            fatal_error "cannot calculate image cache partition end"
+    }
+
+    validate_image_cache_partition_end() {
+        validate_nonzero_uint "$1" "calculated image cache partition end is invalid"
+    }
+
+    ensure_variable "image_cache_partition_end" \
+        "calculate_image_cache_partition_end" \
+        "validate_image_cache_partition_end"
+}
+
+parse_last_partition_end() {
+    local -r gawk_input_data="$1"
+    local -r input_size_unit="$2"
+    local -r start_parse_token="$3"
+
+    local gawk_program
+    read -r -d '' gawk_program << 'EOF'
+        $0 ~ start_parse_regex_pattern {
+            parsing=1;
+            max_part_end=0;
+            next;
+        }
+        parsing && $3 ~ disk_end_regex_pattern {
+            part_end=substr($3, 1, length($3)-length(size_unit)) + 0;
+            if(part_end>max_part_end) {
+                max_part_end=part_end;
+            }
+        }
+        END {
+            printf "%d", max_part_end;
+        }
+EOF
+
+    echo "$gawk_input_data" | gawk \
+        -v start_parse_regex_pattern="^$start_parse_token" \
+        -v disk_end_regex_pattern="^[0-9]+$input_size_unit$" \
+        -v size_unit="$input_size_unit" \
+        -M "$gawk_program" \
+        || fatal_error "cannot extract image size"
+}
+
+# Print the end offset of the last partition of a parted output
+parse_parted_last_partition_end() {
+    local -r parted_output="$1"
+    local -r parted_unit="$2"
+
+    parse_last_partition_end "$parted_output" "$parted_unit" 'Number'
+}
+
+parse_parted_last_partition_end_sector() {
+    parse_parted_last_partition_end "$1" "s"
+}
+
+parse_fdisk_last_partition_end_sector() {
+    local -r fdisk_output="$1"
+
+    parse_last_partition_end "$fdisk_output" '' 'Device'
+}
+
+create_hidden_partition() {
+    echo "Erasing MBR..."
+    dd if=/dev/zero bs=512 count=1 of="$deployment_disk"
+    echo "MBR erased."
+
+    echo "Creating new partition table with hidden partition..."
+    ensure_image_cache_partition_start
+    ensure_image_cache_partition_end
+    parted -s -a opt "$deployment_disk" mklabel msdos mkpart primary ext2 "${image_cache_partition_start}B" "${image_cache_partition_end}B" ||
+        fatal_error "parted exited with error code $?"
+    echo "New partition table with hidden partition created."
+
+    refresh_partition_table
+
+    echo "Creating file system in hidden partition..."
+    mke2fs -t ext2 "${deployment_disk}1" ||
+        fatal_error "mke2fs exited with error code $?"
+    echo "File system in hidden partition created."
+
+    refresh_partition_table
+}
+```
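+
+Le calcul effectué par `calculate_image_cache_partition_size` — réserver
+20 % du disque pour le cache d'images, en arrondissant vers le bas à un
+multiple de la taille de secteur physique — peut être illustré par
+l'esquisse suivante (les tailles de disque et de secteur utilisées sont
+des valeurs d'exemple purement hypothétiques) :
+
+```bash
+#!/bin/bash
+# Valeurs d'exemple (hypothétiques) : disque d'environ 500 Go et
+# secteurs physiques de 4096 octets.
+total_disk_size=500107862016
+physical_sector_size=4096
+
+# 20 % du disque, arrondis vers le bas au multiple de la taille de
+# secteur physique (même expression arithmétique que dans le script).
+image_cache_partition_size=$(((20 * total_disk_size / 100) / physical_sector_size * physical_sector_size))
+
+echo "Taille de la partition cache : $image_cache_partition_size octets"
+# Par construction, le reste de la division par la taille de secteur est 0 :
+echo "Reste : $((image_cache_partition_size % physical_sector_size))"
+```
+
+La division entière suivie de la multiplication garantit que la taille
+obtenue est alignée sur les secteurs physiques, ce qui évite de créer
+une partition dont la fin tomberait au milieu d'un secteur.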
+
+
+
+## `deployer/bootiful-deploy`: script de déploiement d'images
+
+```bash
+#!/bin/bash
+
+shopt -s nullglob
+
+readonly SCRIPT_NAME="$(basename "$0")"
+readonly SCRIPT_DIR="$(readlink -m "$(dirname "$0")")"
+
+usage() {
+    cat << EOF
+Usage:
+  $SCRIPT_NAME [-h | --help]
+
+Description:
+  Deploys an operating system image on the disk.
+
+  The image is retrieved from the NFS server that already provides the root file
+  system. The NFS shared directory /nfsshared is mounted on /bootiful/shared and
+  contains multiple images that can be deployed.
+
+  The available images from the server are scanned from /bootiful/shared/images
+  and displayed in an interactive menu that lets the user choose which image
+  will be deployed.
+
+  All the data written to standard output and standard error during the
+  deployment is also written to a log file in /bootiful/shared/log.
+
+  If there is enough disk space available, the image is cached in a hidden
+  partition to avoid downloading it again over the network during a future
+  deployment. This hidden partition takes 20% of the disk.
+
+  If the image to deploy overlaps the image cache partition (i.e. the image
+  takes more than 80% of the disk size), a warning message is shown and an
+  interactive prompt lets the user abort or continue the deployment without
+  using the cache.
+
+Options:
+  -h, --help  Shows this help
+
+Exit status:
+  0  if an image has been deployed successfully
+  1  if an error has occurred during deployment
+
+Example:
+  $SCRIPT_NAME
+EOF
+}
+
+if [[ "$1" == "-h" || "$1" == "--help" ]]; then
+    usage
+    exit 0
+fi
+
+# Loads declarations from the 'bootiful-common' script, which is a "library"
+# of functions and constants shared by multiple bootiful-* scripts.
+readonly bootiful_common_script_file="$SCRIPT_DIR/bootiful-common"
+if [[ ! -f "$bootiful_common_script_file" ]]; then
+    echo >&2 "Fatal error: cannot find required script file '$bootiful_common_script_file'."
+    exit 1
+fi
+# shellcheck source=./bootiful-common
+. "$bootiful_common_script_file"
+
+start_step_batch "remote shared data mount initialization"
+start_timestamp="${step_timestamps[0]}"
+
+ensure_remote_shared_mounted
+
+start_step_batch "log file initialization"
+readonly log_dir="$mounting_point_remote/log"
+
+ensure_mac_address
+ensure_directory "$log_dir"
+readonly logfile_date=$(date --date "@$start_timestamp" --universal +%Y-%m-%d_%H-%M-%S)
+readonly log_file_prefix="$log_dir/${mac_address}_$logfile_date"
+
+readonly log_file="$log_file_prefix.log"
+echo "Starting logging stdout and stderr to $log_file..."
+
+{
+    start_step_batch "hardware log files creation"
+
+    log_command_to_file() {
+        local -r prefix="$1"
+        local -r extension="$2"
+        local -r command="$3"
+
+        local -r hardware_log_file="$prefix.$extension"
+
+        echo "Writing $extension log file $hardware_log_file..."
+
+        # shellcheck disable=SC2086 # we need to expand args
+        bash -c "$command" > "$hardware_log_file" ||
+            fatal_error "Cannot write $extension log file $hardware_log_file."
+
+        echo "Wrote $extension log file $hardware_log_file."
+    }
+
+    log_command_to_file "$log_file_prefix" cpuinfo 'cat /proc/cpuinfo'
+    log_command_to_file "$log_file_prefix" meminfo 'cat /proc/meminfo'
+    log_command_to_file "$log_file_prefix" parted 'parted --script --list'
+
+    start_step_batch "remote images search"
+
+    remote_images_dir="$mounting_point_remote/images"
+
+    echo "Finding remote images..."
+    declare images_count=0
+    declare -A images
+    declare found_image_name
+    for image_folder in "$remote_images_dir"/*; do
+        found_image_name=$(basename "$image_folder")
+        echo "Image '$found_image_name' found"
+        image_options+=("$((++images_count))" "$found_image_name")
+        images[$images_count]="$found_image_name"
+    done
+    echo "$images_count remote images found."
+
+    if [[ $images_count -eq 0 ]]; then
+        fatal_error "No image found in remote images directory $remote_images_dir"
+    fi
+
+    start_step_interactive "image selection"
+
+    declare tty
+    tty=$(tty)
+    readonly tty
+
+    declare image_choice
+    image_choice=$(dialog \
+        --clear \
+        --title "Image selection" \
+        --menu "Select an image to deploy" \
+        0 0 0 \
+        "${image_options[@]}" \
+        2>&1 > "$tty")
+    readonly image_choice
+
+    validate_not_empty "$image_choice" "No image has been chosen"
+
+    readonly image_name=${images[$image_choice]}
+
+    echo "Chosen image is $image_name"
+
+    readonly remote_image_dir="$remote_images_dir/$image_name"
+    readonly remote_image_gzip_file="$remote_image_dir/$image_name.img.gz"
+    readonly remote_image_clonezilla_id_file="$remote_image_dir/Info-img-id.txt"
+    readonly remote_image_customizations_dir="$remote_image_dir/customizations"
+
+    readonly IMAGE_TYPE_RAW="raw"
+    readonly IMAGE_TYPE_CLONEZILLA="clonezilla"
+
+    if [[ -f "$remote_image_gzip_file" ]]; then
+        readonly image_type="$IMAGE_TYPE_RAW"
+        readonly remote_image_md5_file="$remote_image_dir/$image_name.md5"
+        readonly remote_image_size_file="$remote_image_dir/$image_name.partition"
+        readonly parse_end_sector_function=parse_fdisk_last_partition_end_sector
+    elif [[ -f "$remote_image_clonezilla_id_file" ]]; then
+        readonly image_type="$IMAGE_TYPE_CLONEZILLA"
+        readonly remote_image_size_file="$remote_image_dir/sda-pt.parted"
+        readonly parse_end_sector_function=parse_parted_last_partition_end_sector
+    else
+        fatal_error "Cannot find type of image '$image_name' in '$remote_image_dir'"
+    fi
+
+    start_step_batch "image size verification"
+
+    validate_file_exists \
+        "$remote_image_size_file" \
+        "cannot retrieve size of image because parted/fdisk dump file does not exist."
+
+    readonly remote_image_size_file_content="$(< "$remote_image_size_file")" ||
+        fatal_error "Cannot read parted/fdisk dump file '$remote_image_size_file'"
+
+    declare image_end_sector
+    image_end_sector=$("$parse_end_sector_function" "$remote_image_size_file_content")
+    readonly image_end_sector
+    validate_nonzero_uint "$image_end_sector" "Invalid image end sector"
+
+    ensure_logical_sector_size
+    ((image_size = image_end_sector * logical_sector_size))
+    validate_nonzero_uint "$image_size" "Retrieved image size is invalid"
+
+    echo "Image type: $image_type"
+    echo "Image size: $image_size B"
+
+    ensure_total_disk_size
+    echo "Total disk size: $total_disk_size B"
+    ensure_image_cache_partition_size
+    echo "Image cache partition size: $image_cache_partition_size B"
+
+    if [[ "$image_size" -gt "$total_disk_size" ]]; then
+        fatal_error "Insufficient disk space for imaging. Image size: $image_size B Disk size: $total_disk_size B"
+    fi
+
+    mounting_point_hidden="/mnt"
+    DEPLOY_MODE_ALREADY_CACHED='cached'
+    DEPLOY_MODE_CACHED_NOW='cached_now'
+    DEPLOY_MODE_NOT_CACHED='not_cached'
+
+    if [[ -d "$remote_image_customizations_dir" ]]; then
+        start_step_batch "remote customizations search"
+        declare customizations_count=0
+        declare -A customizations
+        declare found_customization_name
+        for customization_folder in "$remote_image_customizations_dir"/*; do
+            found_customization_name=$(basename "$customization_folder")
+            echo "Customization '$found_customization_name' found"
+            customization_options+=("$((++customizations_count))" "$found_customization_name" "off")
+            customizations[$customizations_count]="$found_customization_name"
+        done
+        echo "$customizations_count customizations found."
+        readonly customization_options
+        readonly customizations
+
+        start_step_interactive "customizations selection"
+        declare customization_choices
+        customization_choices=$(dialog \
+            --clear \
+            --title "Customizations selection" \
+            --checklist "Select customizations to apply after deployment" \
+            0 0 0 \
+            "${customization_options[@]}" \
+            2>&1 > "$tty")
+        readonly customization_choices
+    fi
+
+    ensure_image_cache_partition_start
+    if [[ "$image_size" -gt "$image_cache_partition_start" ]]; then
+        start_step_interactive "hidden partition destruction confirmation"
+
+        warning "Image overlaps with the local image cache. It can be deployed from the network, but the image cache will be destroyed."
+
+        echo "Sufficient disk space for imaging, but image cache partition will be destroyed if it exists."
+        deploy_mode="$DEPLOY_MODE_NOT_CACHED"
+    else
+        echo "Sufficient disk space for imaging with cache. Image cache partition will be restored or created."
+
+        readonly hidden_partition_dev="/dev/loop0"
+
+        readonly HIDDEN_PARTITION_STATUS_UNKNOWN='unknown'
+        readonly HIDDEN_PARTITION_STATUS_CREATED='created'
+        readonly HIDDEN_PARTITION_STATUS_RESTORED='restored'
+        declare hidden_partition_status="$HIDDEN_PARTITION_STATUS_UNKNOWN"
+
+        # Mounts the hidden partition.
+        # If there is a mount error and the partition was restored, the partition
+        # will be recreated and mounting will be attempted again.
+        mount_hidden_partition() {
+            start_step_batch "image cache partition mount attempt"
+
+            echo "Creating loopback node for $deployment_disk (offset=$image_cache_partition_start) on $hidden_partition_dev"
+
+            losetup -o "$image_cache_partition_start" "$hidden_partition_dev" "$deployment_disk" ||
+                fatal_error "Failed to create loopback node for $deployment_disk on $hidden_partition_dev"
+            echo "Created loopback node for $deployment_disk on $hidden_partition_dev"
+
+            echo "Mounting hidden partition from $hidden_partition_dev on $mounting_point_hidden..."
+            if ! mount -t ext2 "$hidden_partition_dev" "$mounting_point_hidden"; then
+                local -r error_message="Cannot mount $hidden_partition_status hidden partition from $hidden_partition_dev on $mounting_point_hidden"
+
+                if [[ "$hidden_partition_status" != "$HIDDEN_PARTITION_STATUS_CREATED" ]]; then
+                    echo "$error_message"
+
+                    losetup -d "$hidden_partition_dev" ||
+                        fatal_error "Cannot detach loopback device $hidden_partition_dev"
+
+                    start_step_batch "image cache partition creation"
+                    create_hidden_partition
+                    hidden_partition_status="$HIDDEN_PARTITION_STATUS_CREATED"
+                    mount_hidden_partition
+                else
+                    fatal_error "$error_message"
+                fi
+            fi
+
+            if [[ "$hidden_partition_status" != "$HIDDEN_PARTITION_STATUS_CREATED" ]]; then
+                hidden_partition_status="$HIDDEN_PARTITION_STATUS_RESTORED"
+            fi
+
+            echo "Hidden partition mounted on $mounting_point_hidden"
+        }
+
+        mount_hidden_partition
+
+        read_raw_image_id_file() {
+            local -r image_id_file="$1"
+            head -n 1 "$image_id_file" | cut -d " " -f 1
+        }
+
+        read_clonezilla_image_id_file() {
+            local -r image_id_file="$1"
+            awk -F '=' '/IMG_ID/ {print $2}' "$image_id_file"
+        }
+
+        # Check if the selected image exists in cache by comparing its id file to the one stored in cache.
+        # Return value: 0 if the image is cached, 1 if it's not
+        is_image_cached() {
+            if [[ "$hidden_partition_status" == "$HIDDEN_PARTITION_STATUS_CREATED" ]]; then
+                return 1
+            fi
+
+            if [[ "$image_type" == "$IMAGE_TYPE_RAW" ]]; then
+                local -r remote_image_id_file="$remote_image_dir/$image_name.md5"
+                local -r cached_image_id_file="$mounting_point_hidden/$image_name.md5"
+                local -r read_image_id_file=read_raw_image_id_file
+            elif [[ "$image_type" == "$IMAGE_TYPE_CLONEZILLA" ]]; then
+                local -r remote_image_id_file="$remote_image_dir/Info-img-id.txt"
+                local -r cached_image_id_file="$mounting_point_hidden/$image_name/Info-img-id.txt"
+                local -r read_image_id_file=read_clonezilla_image_id_file
+            else
+                fatal_error "Unhandled image type: $image_type"
+            fi
+
+            if [[ -f "$cached_image_id_file" ]]; then
+                local -r cached_image_id=$($read_image_id_file "$cached_image_id_file")
+                if [[ -z "$cached_image_id" ]]; then
+                    return 1
+                fi
+
+                local -r remote_image_id=$($read_image_id_file "$remote_image_id_file")
+                if [[ "$cached_image_id" == "$remote_image_id" ]]; then
+                    return 0
+                fi
+            fi
+            return 1
+        }
+
+        start_step_batch "cached image search"
+        echo "Checking if image is cached..."
+        if is_image_cached; then
+            echo "Image found in cache."
+            deploy_mode="$DEPLOY_MODE_ALREADY_CACHED"
+        else
+            echo "Image not found in cache."
+
+            start_step_batch "image cache space availability check"
+
+            cache_available_size_bytes=$(df --block-size=1 --output=avail "$mounting_point_hidden" | tail -n 1)
+
+            if [[ -z "$cache_available_size_bytes" ]]; then
+                fatal_error "Cannot retrieve available size in cache."
+            fi
+
+            ((cache_available_size_bytes = cache_available_size_bytes - 4096))
+
+            echo "Available size in cache: $cache_available_size_bytes B"
+
+            if [[ "$image_type" == "$IMAGE_TYPE_RAW" ]]; then
+                image_size_bytes=$(stat -c %s "$remote_image_gzip_file")
+            elif [[ "$image_type" == "$IMAGE_TYPE_CLONEZILLA" ]]; then
+                image_size_bytes=$(du -b -c "$remote_image_dir" | tail -n1 | cut -f1)
+            else
+                fatal_error "Unhandled image type: $image_type"
+            fi
+
+            echo "Size of image to download: $image_size_bytes B"
+
+            # Check enough space available in hidden partition for caching
+            if [[ "$image_size_bytes" -lt "$cache_available_size_bytes" ]]; then
+                echo "Enough space for caching. Image will be cached and deployed simultaneously."
+                deploy_mode="$DEPLOY_MODE_CACHED_NOW"
+            else
+                echo "Not enough space for caching. Image will be deployed without caching."
+                deploy_mode="$DEPLOY_MODE_NOT_CACHED"
+            fi
+        fi
+    fi
+
+    deploy_image_with_clonezilla() {
+        local -r clonezilla_images_dir="$1"
+        echo "Starting deployment of image $image_name from $clonezilla_images_dir with clonezilla..."
+        # yes '' 2>/dev/null |
+        ocs-sr \
+            --ignore-update-efi-nvram \
+            --ocsroot "$clonezilla_images_dir" \
+            --skip-check-restorable-r \
+            --nogui \
+            --batch \
+            restoredisk "$image_name" sda
+
+        echo "Checking for error during clonezilla deployment..."
+
+        if grep "Failed to restore partition image file" /var/log/clonezilla.log; then
+            fatal_error "Error while deploying image with clonezilla."
+        fi
+
+        echo "Image deployed with clonezilla."
+    }
+
+    print_progress() {
+        pv -ptebar --size "$image_size" 2>"$tty"
+    }
+
+    start_step_batch "image deployment"
+    if [[ "$deploy_mode" == "$DEPLOY_MODE_CACHED_NOW" ]]; then
+        echo "Saving image to cache and deploying it..."
+        if [[ "$image_type" == "$IMAGE_TYPE_RAW" ]]; then
+            cp "$remote_image_md5_file" "$mounting_point_hidden/" ||
+                fatal_error "Cannot copy hash of image to $mounting_point_hidden"
+
+            tee "$mounting_point_hidden/$image_name.img.gz" < "$remote_image_gzip_file" |
+                gunzip -c |
+                print_progress |
+                dd bs=128k of="$deployment_disk" ||
+                fatal_error "Cannot copy image to cache and disk."
+        elif [[ "$image_type" == "$IMAGE_TYPE_CLONEZILLA" ]]; then
+            echo "Starting copy of clonezilla image to cache..."
+            rm -rf "${mounting_point_hidden:?}/$image_name"
+            cp -r "$remote_image_dir" "${mounting_point_hidden:?}/" ||
+                fatal_error "Error while copying remote image to cache."
+            echo "Clonezilla image copied to cache."
+            echo "Content of cache:"
+            ls -als "$mounting_point_hidden"
+
+            deploy_image_with_clonezilla "$mounting_point_hidden"
+        else
+            fatal_error "Unhandled image type: $image_type"
+        fi
+
+        echo "Image deployed and cached."
+    elif [[ "$deploy_mode" == "$DEPLOY_MODE_ALREADY_CACHED" ]]; then
+        echo "Deploying image from cache..."
+
+        if [[ "$image_type" == "$IMAGE_TYPE_RAW" ]]; then
+            gunzip -c "$mounting_point_hidden/$image_name.img.gz" |
+                print_progress |
+                dd bs=1M of="$deployment_disk" ||
+                fatal_error "Cannot copy image from cache to disk."
+        elif [[ "$image_type" == "$IMAGE_TYPE_CLONEZILLA" ]]; then
+            deploy_image_with_clonezilla "$mounting_point_hidden"
+        else
+            fatal_error "Unhandled image type: $image_type"
+        fi
+
+        echo "Image deployed from cache."
+    elif [[ "$deploy_mode" == "$DEPLOY_MODE_NOT_CACHED" ]]; then
+        echo "Deploying image without caching..."
+
+        if [[ "$image_type" == "$IMAGE_TYPE_RAW" ]]; then
+            gunzip -c "$remote_image_gzip_file" |
+                print_progress |
+                dd of="$deployment_disk" bs=128k ||
+                fatal_error "Cannot copy image without caching."
+        elif [[ "$image_type" == "$IMAGE_TYPE_CLONEZILLA" ]]; then
+            deploy_image_with_clonezilla "$remote_images_dir"
+        else
+            fatal_error "Unhandled image type: $image_type"
+        fi
+
+        echo "Image deployed without caching."
+    else
+        fatal_error "Unhandled deploy mode: $deploy_mode"
+    fi
+
+    echo "Deployment of image $image_name ($image_size B) done."
+
+    if findmnt --mountpoint "$mounting_point_hidden"; then
+        start_step_batch "image cache partition unmount"
+        echo "Unmounting hidden partition from $mounting_point_hidden"
+        umount "$mounting_point_hidden" ||
+            fatal_error "Cannot unmount hidden partition from $mounting_point_hidden"
+        echo "Unmounted hidden partition from $mounting_point_hidden"
+
+        start_step_batch "image cache partition check"
+        fsck -y "$hidden_partition_dev"
+    fi
+
+    if [[ -n "$customization_choices"  ]]; then
+        start_step_batch "customizations deployment"
+
+        for customization_choice in $customization_choices; do
+            customization="${customizations[customization_choice]}"
+            customization_dir="$remote_image_customizations_dir/$customization"
+
+            echo "Deploying customization '$customization' from '$customization_dir'"
+            for customization_partition_dir in "$customization_dir"/*; do
+                customization_partition="$(basename "$customization_partition_dir")"
+                validate_with_regex "$customization_partition" '^sda[0-9]$' "customization sub-directory name does not match a partition of sda"
+                customization_partition_mount_point="/bootiful/mounted_customization_partitions/$customization_partition"
+                customization_partition_device="/dev/$customization_partition"
+                ensure_mounted "$customization_partition_device" "$customization_partition_mount_point"
+                cp -RT "$customization_dir/$customization_partition/" "$customization_partition_mount_point" ||
+                    fatal_error "Cannot copy customization files to $customization_partition_mount_point"
+            done
+        done
+    fi
+
+    start_step_batch "EFI entrypoint file creation"
+
+    readonly remote_image_efi_entrypoint_file="$remote_image_dir/efi_entrypoint"
+    readonly remote_image_efi_nvram_file="$remote_image_dir/efi-nvram.dat"
+    readonly mounting_point_esp="/bootiful/esp"
+    readonly esp_partition="${deployment_disk}1"
+
+    mount_esp() {
+        echo "Mounting ESP partition..."
+        ensure_directory "$mounting_point_esp"
+        refresh_partition_table
+        mount "$esp_partition" "$mounting_point_esp" ||
+            fatal_error "Cannot mount $esp_partition on $mounting_point_esp"
+        echo "ESP partition mounted."
+    }
+
+    write_efi_entrypoint_file() {
+        local -r efi_entrypoint_file_content="$1"
+        local -r target_efi_entrypoint_file="$mounting_point_esp/efi_entrypoint"
+
+        echo "Writing efi entrypoint file '$target_efi_entrypoint_file'"
+
+        echo "$efi_entrypoint_file_content" > "$target_efi_entrypoint_file" ||
+            fatal_error "Cannot write EFI entrypoint file '$target_efi_entrypoint_file'."
+
+        echo "EFI entrypoint file '$target_efi_entrypoint_file' written."
+
+        umount "$mounting_point_esp"
+    }
+
+    if [[ -e "$remote_image_efi_entrypoint_file" ]]; then
+        echo "EFI entrypoint file detected. Copying it to ESP root..."
+        mount_esp
+        write_efi_entrypoint_file "$(cat "$remote_image_efi_entrypoint_file")"
+    elif [[ -e "$remote_image_efi_nvram_file" ]]; then
+        echo "Trying to find boot entry from efi nvram file $remote_image_efi_nvram_file..."
+
+        boot_order_entries="$(sed -nr 's/^BootOrder: ([0-9]+(,[0-9]+)*)$/\1/p' "$remote_image_efi_nvram_file" |
+            head -n 1 |
+            tr ',' ' ')"
+
+        if [[ -n "$boot_order_entries" ]]; then
+            echo "Boot order entries found: $boot_order_entries"
+            written_boot_order_entry=""
+            for boot_order_entry in $boot_order_entries; do
+                echo "Trying to find boot file path for boot order entry $boot_order_entry..."
+                boot_file_path="$(sed -nr "s|^Boot$boot_order_entry.*\tHD\(1.*\)/File\((.*)\).*$|\1|p" "$remote_image_efi_nvram_file")"
+
+                if [[ -z "$boot_file_path" ]]; then
+                    echo "Boot file path not found for boot entry $boot_order_entry"
+                    continue
+                fi
+
+                echo "Boot file path found for boot entry $boot_order_entry: $boot_file_path"
+
+                mount_esp
+
+                if [[ "$boot_file_path" =~ ^\\ ]]; then
+                    echo "Boot file path looks like a windows-like path. Converting it to a unix-like path..."
+                    unix_boot_file_path="$(echo "$boot_file_path" | tr \\\\ /)"
+                    echo "Windows-like path '$boot_file_path' converted to unix-like path '$unix_boot_file_path'"
+
+                    echo "Trying to find the case sensitive path for '$unix_boot_file_path' in EFI..."
+                    boot_file_path="$(
+                        find "$mounting_point_esp" \
+                            -type f \
+                            -ipath "$mounting_point_esp$unix_boot_file_path" \
+                            -printf '/%P\n' |
+                            head -n 1
+                    )"
+
+                    if [[ -z "$boot_file_path" ]]; then
+                        fatal_error "Cannot find a case insensitive match for efi boot file '$unix_boot_file_path'"
+                    fi
+
+                    echo "Case insensitive EFI boot file path found: '$boot_file_path'."
+                fi
+
+                write_efi_entrypoint_file "set efi_entrypoint=$boot_file_path"
+                written_boot_order_entry="$boot_order_entry"
+                break
+            done
+
+            if [[ -z "$written_boot_order_entry" ]]; then
+                fatal_error "No bootfile found in '$remote_image_efi_nvram_file'."
+            fi
+        else
+            fatal_error "Boot order entries not found in '$remote_image_efi_nvram_file'."
+        fi
+    else
+        echo "No EFI entrypoint file or EFI nvram file found."
+    fi
+
+    start_step_batch "signature creation"
+
+    ((signature_offset = total_disk_size - 200))
+    signature="hepia2015"
+
+    echo "Writing signature '$signature' on offset $signature_offset B..."
+    echo -n "$signature" | dd of="$deployment_disk" seek="$signature_offset" bs=1 oflag=seek_bytes ||
+        fatal_error "Cannot write signature at the end of the disk"
+    echo "Signature written."
+
+    echo "Image deployment process successful."
+
+    print_step_durations
+
+} 2>&1 | tee "$log_file"
+
+exit "${PIPESTATUS[0]}"
+
+```
+
+
+
+## `deployer/bootiful-save-image`: utility script for creating a raw image
+
+```bash
+#!/bin/bash
+readonly SCRIPT_NAME="$(basename "$0")"
+readonly SCRIPT_DIR="$(readlink -m "$(dirname "$0")")"
+
+usage() {
+    cat << EOF
+Usage:
+  $SCRIPT_NAME IMAGE_NAME
+  $SCRIPT_NAME [-h | --help]
+
+Description:
+  Saves a raw dd image of the /dev/sda device to the remote server shared images
+  folder.
+
+Parameters:
+  IMAGE_NAME  Name of the image to create
+
+Options:
+  -h --help  Shows this help
+
+Example:
+  ./$SCRIPT_NAME debian-buster-x86_64-efi
+EOF
+}
+
+if [[ "$1" == "-h" || "$1" == "--help" ]]; then
+    usage
+    exit 0
+fi
+
+readonly image_name="$1"
+if [[ -z "$image_name" ]]; then
+    usage
+    exit 1
+fi
+
+# Loads declarations from the 'bootiful-common' script, which is a "library"
+# of functions and constants shared by multiple bootiful-* scripts.
+readonly bootiful_common_script_file="$SCRIPT_DIR/bootiful-common"
+if [[ ! -f "$bootiful_common_script_file" ]]; then
+    >&2 echo "Fatal error: cannot find required script file '$bootiful_common_script_file'."
+    exit 1
+fi
+# shellcheck source=./bootiful-common
+. "$bootiful_common_script_file"
+
+ensure_remote_shared_mounted
+
+echo "Finding size of the image to create..."
+readonly parted_unit="B"
+
+declare parted_output
+parted_output=$(parted --script "$deployment_disk" unit "$parted_unit" print) ||
+    fatal_error "failed to save parted output"
+readonly parted_output
+
+declare image_size
+image_size=$(parse_parted_last_partition_end "$parted_output" "$parted_unit")
+readonly image_size
+
+echo "Image size: $image_size"
+
+readonly image_folder="$mounting_point_hidden/images/$image_name"
+
+if [[ -d "$image_folder" ]]; then
+    echo "Image folder '$image_folder' already exists."
+    exit 1
+fi
+
+if ! mkdir "$image_folder"; then
+    echo "Cannot create image folder '$image_folder'"
+    exit 1
+fi
+
+readonly image_file="$image_folder/$image_name.img.gz"
+if ! pv --size "$image_size" --stop-at-size "$deployment_disk" |
+        pigz -c > "$image_file"
+then
+    echo "Cannot create image file '$image_file'"
+    exit 1
+fi
+
+# TODO: Rename md5 files to uuid because creating md5 hashes takes too much time
+readonly md5_file="$image_folder/$image_name.md5"
+if ! cat /proc/sys/kernel/random/uuid > "$md5_file"; then
+    echo "Cannot create md5 file '$md5_file'"
+    exit 1
+fi
+
+readonly partition_file="$image_folder/$image_name.partition"
+if ! fdisk -l "$deployment_disk" > "$partition_file"; then
+    echo "Cannot create partition file $partition_file"
+    exit 1
+fi
+
+readonly size_file="$image_folder/$image_name.size"
+if ! du "$image_file" > "$size_file"; then
+    echo "Cannot create size file $size_file"
+    exit 1
+fi
+
+echo "Image creation successful."
+exit 0
+
+
+```
+
+
+
+## `deployer/bootiful-reset-cache`: utility script for resetting the image cache
+
+```bash
+#!/bin/bash
+
+readonly SCRIPT_NAME="$(basename "$0")"
+readonly SCRIPT_DIR="$(readlink -m "$(dirname "$0")")"
+
+usage() {
+    cat << EOF
+Usage:
+  $SCRIPT_NAME [-h | --help]
+
+Description:
+  Clears the bootiful image cache by re-creating the hidden partition
+
+Options:
+  -h --help  Shows this help
+
+Example:
+  ./$SCRIPT_NAME
+EOF
+}
+
+if [[ "$1" == "-h" || "$1" == "--help" ]]; then
+    usage
+    exit 0
+fi
+
+# Loads declarations from the 'bootiful-common' script, which is a "library"
+# of functions and constants shared by multiple bootiful-* scripts.
+readonly bootiful_common_script_file="$SCRIPT_DIR/bootiful-common"
+if [[ ! -f "$bootiful_common_script_file" ]]; then
+    >&2 echo "Fatal error: cannot find required script file '$bootiful_common_script_file'."
+    exit 1
+fi
+# shellcheck source=./bootiful-common
+. "$bootiful_common_script_file"
+
+validate_exists "$deployment_disk"
+create_hidden_partition
+
+```
+
+
+
+## `dhcp/Dockerfile`: _Docker_ configuration of the <abbr title="Dynamic Host Configuration Protocol">DHCP</abbr> server
+
+```dockerfile
+FROM alpine:3.12
+RUN apk add dhcp-server-vanilla && touch /var/lib/dhcp/dhcpd.leases
+COPY dhcpd.conf /etc/dhcp/dhcpd.conf
+EXPOSE 67
+ENTRYPOINT ["dhcpd", "-f"]
+
+```
+
+
+
+## `dhcp/dhcpd.conf`: configuration of the <abbr title="Dynamic Host Configuration Protocol">DHCP</abbr> server
+
+```bash
+allow bootp;
+
+subnet 192.168.56.0 netmask 255.255.255.0 {
+    range 192.168.56.10 192.168.56.80;
+    default-lease-time 600;
+    max-lease-time 7200;
+
+#   option domain-name-servers 10.136.132.100;
+#   option routers  192.168.56.100;
+
+    class "pxeclient" {
+        match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";
+        next-server 192.168.56.100;
+        option tftp-server-name "192.168.56.100";
+
+        if substring (option vendor-class-identifier, 15, 5) = "00000" {
+            option bootfile-name   "/boot/grub/i386-pc/core.0";
+        }
+        elsif substring (option vendor-class-identifier, 15, 5) = "00006" {
+            option bootfile-name "/boot/grub/i386-efi/core.efi";
+        }
+        else {
+         option bootfile-name "/boot/grub/x86_64-efi/core.efi"; 
+        }
+    }
+
+    class "normalclient" {
+        match if substring (option vendor-class-identifier, 0, 9) != "PXEClient";
+    }
+}
+
+
+```
+
+
+
+## `grub/Dockerfile`: _Docker_ configuration for building <abbr title="GRand Unified Bootloader">GRUB</abbr>
+
+```dockerfile
+FROM debian:buster AS build-stage
+RUN apt-get update && apt-get install -y --no-install-recommends \
+        gcc \
+        make \
+        bison \
+        gettext \
+        binutils \
+        flex \
+        pkg-config \
+        libdevmapper-dev \
+        libfreetype6-dev \
+        unifont \
+        python \
+        automake \
+        autoconf
+
+WORKDIR /bootiful-grub
+ADD ./bootiful-grub ./
+
+ARG PLATFORM
+ARG TARGET
+RUN ./configure --with-platform=${PLATFORM} --target=${TARGET}
+RUN make
+RUN make install
+
+RUN grub-mknetdir --net-directory=./netdir --subdir=./boot/grub
+
+FROM scratch AS export-stage
+COPY --from=build-stage ./bootiful-grub/netdir /
+
+```
+
+
+
+## `nfs/Dockerfile`: _Docker_ configuration of the <abbr title="Network File System">NFS</abbr> server
+
+```dockerfile
+FROM erichough/nfs-server
+ADD nfsroot.tar.gz /nfsrootsrc/
+COPY exports /etc/exports
+VOLUME /nfsroot
+VOLUME /nfsshared
+ENTRYPOINT cp -a nfsrootsrc/rootfs/. /nfsroot/ && entrypoint.sh
+
+```
+
+
+
+## `nfs/exports`: share configuration of the <abbr title="Network File System">NFS</abbr> server
+
+```bash
+# /etc/exports: the access control list for filesystems which may be exported
+#		to NFS clients.  See exports(5).
+#
+# Example for NFSv2 and NFSv3:
+# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
+#
+# Example for NFSv4:
+# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
+# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
+#
+/nfsroot         *(ro,fsid=0,no_root_squash,no_subtree_check,async,insecure)
+/nfsshared       *(rw,fsid=1,no_root_squash,no_subtree_check,async,insecure)
+
+```
+
+
+
+## `tftp/Dockerfile`: _Docker_ configuration of the <abbr title="Trivial File Transfer Protocol">TFTP</abbr> server
+
+```dockerfile
+FROM alpine:3.12
+RUN apk add tftp-hpa
+VOLUME /tftpboot
+EXPOSE 69/udp
+ENTRYPOINT ["in.tftpd", "--foreground", "--address", ":69", "--secure", "--verbose", "/tftpboot"]
+
+```
+
+
+
+## `tftp/tftpd-hpa`: configuration of the <abbr title="Trivial File Transfer Protocol">TFTP</abbr> server
+
+```bash
+TFTP_USERNAME="tftp"
+TFTP_DIRECTORY="/tftpboot"
+TFTP_ADDRESS=":69"
+TFTP_OPTIONS="-s -c"
+RUN_DAEMON="yes"
+
+```
+
+
+
+## `tftp/tftpboot/boot/grub/grub.cfg`: <abbr title="GRand Unified Bootloader">GRUB</abbr> configuration served over <abbr title="Trivial File Transfer Protocol">TFTP</abbr>
+
+```bash
+set timeout=3
+
+insmod part_msdos
+insmod part_gpt
+insmod isign
+insmod all_video
+
+isign -c hepia2015 (hd0)
+set check1=$?
+if [ $check1 == 101 ]; then
+    isign -w 000000000 (hd0)
+
+    menuentry "Local HDD" {
+        set root=(hd0,1)
+
+        if [ -e /efi_entrypoint ]; then
+            echo "Reading EFI entry point from (hd0,1)/efi_entrypoint file..."
+            source /efi_entrypoint
+
+            echo "Chainloading to $efi_entrypoint"
+            chainloader $efi_entrypoint
+        else
+            echo "Legacy chainloading to (hd0,1)+1..."
+            chainloader +1
+        fi
+    }
+fi
+
+menuentry "Bootiful deployer" {
+    echo "Loading vmlinuz..."
+    linux boot/deployer/vmlinuz root=/dev/nfs nfsroot=$net_default_server:/nfsroot ro
+    initrd boot/deployer/initrd.img
+}
+
+
+```
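+
+The `efi_entrypoint` file sourced by this configuration is written to the ESP
+root by the deployment script (`write_efi_entrypoint_file`) and contains a
+single GRUB `set` command. As an illustration (the boot file path below is
+only an example), its content could be:
+
+```bash
+set efi_entrypoint=/EFI/debian/grubx64.efi
+```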
+
+
+
+## `postdeploy/bootiful-postdeploy`: post-deployment script that runs the _Ansible_ playbooks found in a directory
+
+```bash
+#!/bin/bash
+
+function log() {
+    local -r log_message="$1"
+    >&2 echo "$log_message"
+}
+
+function fatal_error() {
+    local -r error_message="$1"
+    log "Fatal error: $error_message"
+
+    log "Stack trace:"
+    local frame=0
+    while >&2 caller $frame; do
+        ((frame++))
+    done
+
+    exit 1
+}
+
+log "Starting bootiful post-deployment script..."
+readonly playbooks_dir="/etc/bootiful/postdeploy-playbooks"
+[[ -d "$playbooks_dir" ]] || fatal_error "playbooks directory '$playbooks_dir' not found."
+
+readonly playbook_files="$(find "$playbooks_dir" -maxdepth 1 -type f -name '*.yml')"
+
+if [[ -z "$playbook_files" ]]; then
+    log "no playbook found in directory '$playbooks_dir'. Exiting."
+    exit 0
+fi
+
+run_playbook() {
+    local -r playbook_file="$1"
+    log "Executing playbook file '$playbook_file'..."
+    [[ -f "$playbook_file" ]] || fatal_error "playbook file $playbook_file not found."
+
+    ansible-playbook \
+        --connection=local \
+        --inventory=127.0.0.1, \
+        "$playbook_file" \
+        || fatal_error "error while executing playbook file '$playbook_file'"
+
+    log "Execution of playbook file '$playbook_file' successful."
+}
+export -f log fatal_error run_playbook
+
+# shellcheck disable=SC2016 # we do not want to expand $1 in bash command
+find "$playbooks_dir" -maxdepth 1 -type f -name '*.yml' -print0 |
+    sort -z |
+    xargs -n1 -0 bash -c $'trap \'[[ $? == 0 ]] || exit 255\' EXIT; run_playbook "$1"' --
+
+```
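+
+As an illustration, a minimal playbook that this script would pick up from
+`/etc/bootiful/postdeploy-playbooks` could look like the following (the file
+name and the task are hypothetical; playbooks are executed in lexicographic
+order, hence the numeric prefix):
+
+```yaml
+# /etc/bootiful/postdeploy-playbooks/10-hostname.yml (hypothetical example)
+- hosts: all
+  become: yes
+  tasks:
+    - name: Set the machine hostname after deployment
+      hostname:
+        name: deployed-client
+```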
+
+
+
+## `postdeploy/bootiful-postdeploy.service`: _Systemd_ unit configuration for running post-deployment scripts on a client
+
+```ini
+[Unit]
+Description=Runs bootiful post-deployment script on boot
+After=network.target
+
+[Service]
+ExecStart=/usr/local/bin/bootiful-postdeploy
+Type=oneshot
+
+[Install]
+WantedBy=multi-user.target
+```
 
diff --git a/doc/rapport.pdf b/doc/rapport.pdf
index 6768516db44d99cb3fa9add40f090f2075c60b96..b73cdd9ee8cc55b07ebf52ea9ad154c296b11cb1 100644
Binary files a/doc/rapport.pdf and b/doc/rapport.pdf differ