Lab-Ansible

Commit f23e78fd (unverified), authored 5 months ago by Marco Emilio "sphakka" Poleggi

Fixed some typos

Signed-off-by: Marco Emilio "sphakka" Poleggi <marcoep@ieee.org>

Parent: d9da524f
Changes: 1 changed file

README.md: +18 −18 (18 additions, 18 deletions)
@@ -42,7 +42,7 @@ lcl$ ansible --version
 1. Create a VM instance with the following characteristics:
    - OS: any GNU/Linux distribution using the `apt` package manager. Tested
-     on Debian 11 (Bullseye) and Ubuntu Server 20.04 LTS
+     on Debian 11 and 12
    - type: the smallest capable of running the above OS. 1 core, 1GB RAM,
      10GB virtual disk should be enough.
    - security group/policy: the one you created above
@@ -239,11 +239,11 @@ Now, open the playbook file `playbooks/web.yml`. It should look like:
 ```
 :bulb: Notice the following aspects:
-* The target `hosts`: it can be a group.
+* The target `hosts` key can be assigned a group.
 * The `became` key instructs Ansible to run as the *superuser* (normally
   `root`) in the managed hosts.
 * Each task uses a different *builtin module*. Can you name them?
-* Builtin modules have self-explanatory:
+* Builtin modules have self-explanatory options:
   * some of them are system-agnostic, e.g., the `copy` one. :question: Which other?
   * others depend on the target OS. :question: Which ones?
 * We can use `tags` to select specific tasks. E.g., with:
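The aspects discussed here can be sketched in one minimal playbook (a hedged illustration with hypothetical names, not the lab's actual `playbooks/web.yml`):

```yaml
# Hypothetical sketch: a play targeting the inventory group 'webservers'.
- name: Configure web servers
  hosts: webservers        # the 'hosts' key is assigned a group
  become: true             # run as the superuser (root) on the managed hosts
  tasks:
    - name: Install nginx            # OS-dependent builtin module
      apt:
        name: nginx
        state: present
      tags: [install]

    - name: Copy the homepage        # system-agnostic builtin module
      copy:
        src: files/index.html
        dest: /var/www/html/index.html
      tags: [content]
```

With such tags, `ansible-playbook web.yml --tags=content` would run only the `copy` task.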
@@ -338,7 +338,7 @@ If everything goes well, the output should confirm a task recap of: ok=6,
 changed=5.
 Point your Web browser to the address of the managed server (mind that we are
-not using SSL): `http://<VM-DNS-name-or-IP-address>`. You should see the
+using plain `http`): `http://<VM-DNS-name-or-IP-address>`. You should see the
 homepage showing "nginx, configured by Ansible".
@@ -376,7 +376,7 @@ desired state as *ok*.
 1. Do something more drastic like completely removing the homepage
    `index.html` and repeat the previous question.
 1. :question: What happened this time?
-1. Notwithstanding the idempotence principle, there's task which is always
+1. Notwithstanding the idempotence principle, there's a task which is always
    marked as "changed".
 1. :question: Which one? Do you have an explanation?
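As an illustration of the "always changed" phenomenon (not necessarily the lab's intended answer): tasks based on `command` or `shell` report *changed* on every run, because Ansible cannot tell whether they altered anything, unless you override the verdict:

```yaml
# Illustration: a shell task is reported as "changed" on every run...
- name: Show nginx version
  shell: nginx -v
  register: nginx_version
  changed_when: false   # ...unless we declare it never changes anything
```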
@@ -429,7 +429,7 @@ Here's an example:
   - name: baz
     ...
     # triggered by signal 'a_signal'
-    listen: signal
+    listen: a_signal
 ```
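With `listen`, several handlers can subscribe to one topic, and a task notifies the topic rather than each handler by name. A hedged sketch (handler and file names are made up):

```yaml
tasks:
  - name: Update app configuration
    copy:
      src: files/app.conf
      dest: /etc/app/app.conf
    notify: a_signal          # notifies the topic, not a handler name

handlers:
  - name: baz
    debug:
      msg: "baz triggered"
    listen: a_signal          # subscribed to the topic 'a_signal'
  - name: qux
    debug:
      msg: "qux triggered"
    listen: a_signal          # both handlers run on one notification
```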
@@ -512,7 +512,7 @@ which makes Ansible ignore it unless explicitly requested:
 Then, you'd call it like this:
 ```shell
-lcl$ ansible-playbook [--check] YOUR_PLAYBOOK --tags='revertall'
+lcl$ ansible-playbook [--check] YOUR_PLAYBOOK --tags='revert_all'
 ```
 :bulb: Notice how, in normal conditions, never-tagged tasks are not shown in
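The `never` special tag keeps a task out of normal runs; pairing it with a custom tag such as `revert_all` lets you opt in explicitly. A sketch (task name and path hypothetical):

```yaml
- name: Remove the deployed homepage
  file:
    path: /var/www/html/index.html
    state: absent
  tags:
    - never        # skipped unless one of its tags is requested explicitly
    - revert_all   # selected by --tags='revert_all'
```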
@@ -573,11 +573,11 @@ restart" dummy task that triggers the handler "restart nginx" on demand.
 Suppose you have an out-of-tree (i.e., not available via the target host
 package manager) complex application composed of several modules which must
-be, first, installed, then, deployed in the correct order. The correct
-approach is to separate installation tasks from deployment handlers, and we
-must ensure redeployment of dependent modules whenever a dependency has
-changed. A possible solution is using *cascading* handlers: indeed, a handler
-can notify another one!
+be, first, installed, then, deployed in the correct order. A good approach is
+to separate installation tasks from deployment handlers, and we must ensure
+redeployment of dependent modules whenever a dependency has changed. A
+possible solution is using *cascading* handlers: indeed, a handler can notify
+another one!
 For the sake of simplicity, let's prepare a dummy 2-module applications:
@@ -602,7 +602,7 @@ some rescue operations:
     copy:
       src: files/app2
       dest: /tmp/app2
-      notify: deploy app2
+    notify: deploy app2
     # Run as the default user
     become: false
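`notify` is a task-level keyword, a sibling of the module name, not one of the module's options, which is why its indentation matters. A sketch of correct placement (reusing the paths from the snippet above):

```yaml
- name: Install app2
  copy:                  # the module and its own options are nested...
    src: files/app2
    dest: /tmp/app2
  notify: deploy app2    # ...while notify sits at task level
  become: false          # run as the default user
```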
@@ -627,9 +627,9 @@ handler must notify the second one. Let's add the above snippet to our
 `playbooks/web.yml` and try it.
 :bulb: Notice the `debug` feature above. Also, to see the echo messages you
-have to use the `-vv` switch.
-At the first installation, we should get both installation task changed
+have to use the `--verbose` switch.
+At the first installation, we should get both installation tasks changed
 **and** both handlers triggered, which makes 4 changes:
 ```shell
 lcl$ ansible-playbook -v playbooks/web.yml
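The cascade itself might be sketched like this: the first handler carries its own `notify`, so running it triggers the second one (handler names and deploy commands are hypothetical):

```yaml
handlers:
  - name: deploy app1
    command: /tmp/app1/deploy.sh
    notify: deploy app2        # a handler can notify another handler...

  - name: deploy app2          # ...which then runs after the notifying one
    command: /tmp/app2/deploy.sh
```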
@@ -689,13 +689,13 @@ service.
 :hammer_and_wrench: **Over to you now.** Prepare another playbook based on the
 `playbooks/kind-metallb.yml` boilerplate.
-:bulb: To go as quick as possible, provision a 2-nodes KinD cluster with 2
+:bulb: To go as quick as possible, provision a 2-node KinD cluster with 2
 http-echo pods on the only worker node.
 Once done, use the following test track to finalize your workflow:
-1. The first playbook run must install, provision and deploy everything. The
+1. The first playbook run shall install, provision and deploy everything. The
    http-echo app shall respond with different messages.
-1. Changing any configuration file must trigger a cascading handler series. E.g.:
+1. Changing any configuration file shall trigger a cascading handler series. E.g.:
   - KinD configuration triggers all operations (apart from package installations).
   - `metallb-native.yaml` triggers a whole app stack redeployment. :bulb:
     `kubectl apply` can be called on a running stack.
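A 2-node KinD cluster (one control-plane, one worker) can be described with a config file along these lines (file name hypothetical):

```yaml
# Hypothetical kind-config.yaml: one control-plane plus one worker node.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
```

`kind create cluster --config kind-config.yaml` would then provision both nodes.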