**Goal:** instruct Ansible about the machines (hosts) it shall manage.
Clone or, if you want to track your progress, fork and clone this Git repo.

It's not advised to run Ansible inside your repo clone, because sensitive
information like SSH keys might get committed in Git. Thus, create a "sandbox"
directory on your local machine:
``` shell
lcl$ mkdir -p ~/ansible/
lcl$ cd ~/ansible/
```
Copy your repo clone's content over to your sandbox (you might need to install `rsync`):
``` shell
lcl$ rsync -Cahv YOUR_REPOS/lab-ansible/ .
...
```
Open the file `hosts.yml`, which is your *inventory*
(a.k.a. hostfile; it can also be written in `.ini` style). Its contents look
like:
``` yaml
all:
  hosts:
    testserver:
      ansible_ssh_host: <VM-DNS-name-or-IP-address>
      ...
      ansible_ssh_private_key_file: <your-private-key>
```
You see? There's only one host named "testserver".
Complete the inventory with the missing data and check its validity:
``` shell
lcl$ ansible-inventory -i hosts.yml --list
```
Verify that you can use Ansible to connect to the testserver:
``` shell
lcl$ ansible -i hosts.yml testserver -m ping
```
You should get a "pong" in response (ignore the rest):
```
testserver | SUCCESS => {
    ...
    "ping": "pong"
}
```
Let's simplify the configuration of Ansible by using a per-user default
configuration file, either `~/.ansible.cfg` or `~/ansible/ansible.cfg`, with
contents (`.ini` style):
``` ini
[defaults]
inventory = hosts.yml
remote_user = <your-vm-user>
private_key_file = <your-private-key>
host_key_checking = false
deprecation_warnings = false
```
Notice that SSH's host key checking is disabled for convenience, as the
managed VM will probably get a new IP address each time it is recreated. **For
production systems this is a security risk!** [See the
doc](https://docs.ansible.com/ansible/latest/reference_appendices/config.html#the-configuration-file)
for further details.
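To double-check which configuration file Ansible actually picks up, you can
ask it directly (a quick sanity check; the exact output varies with the
Ansible version):

``` shell
lcl$ ansible --version
ansible [core ...]
  config file = /home/<you>/.ansible.cfg
  ...
```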
Now we can simplify our inventory file:
``` yaml
all:
  hosts:
    testserver:
      ansible_ssh_host: <VM-DNS-name-or-IP-address>
```
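With these defaults in place, ad-hoc calls no longer need the `-i` flag; the
earlier ping, for instance, shortens to:

``` shell
lcl$ ansible testserver -m ping
```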
Ansible's `command` module (selected with `-m command`) can be used to run
arbitrary commands on the managed hosts. F.i., to print the SSH user's name:

``` shell
lcl$ ansible testserver -m command -a whoami
...
debian
```
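Any other non-interactive command works the same way. For instance, to check
the machine's uptime (your output will differ):

``` shell
lcl$ ansible testserver -m command -a uptime
testserver | CHANGED | rc=0 >>
 09:05:10 up 54 min,  2 users,  load average: 0.01, 0.00, 0.00
```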
### Task #4: Install a web application ###
**Goal:** configure the managed host to run a simple Web application served by
the nginx server.

We need four files:

1. The inventory file `hosts.yml` as written before.
2. A "playbook" that specifies the state to obtain: `playbooks/web.yml`.
3. The nginx configuration file `playbooks/files/nginx.conf`.
4. A home page (Jinja template) for our Web site:
   `playbooks/templates/index.html.j2`.

To make our playbook more generic, we will use a host group called
`webservers` holding our managed server, so that we can later easily add
more servers which Ansible will configure identically. Modify the hostfile by
adding a definition of the group `webservers`; this is **left to you as an
exercise**. Then check that Ansible can reach the group with
`ansible webservers -m ping`: the output should be the same as before.
Now, open the playbook file `playbooks/web.yml`. It should look like:
``` yaml
---
- name: Configure webserver with nginx
  hosts: webservers
  become: True

  tasks:
    - name: install nginx
      apt:
        update_cache: yes
        pkg: nginx
      tags: install

    - name: copy nginx config file
      copy:
        src: files/nginx.conf
        dest: /etc/nginx/sites-available/default
      tags: config

    - name: enable configuration
      file:
        src: /etc/nginx/sites-available/default
        dest: /etc/nginx/sites-enabled/default
        state: link
      tags: config

    - name: copy index.html
      template:
        src: templates/index.html.j2
        dest: /usr/share/nginx/html/index.html
        mode: 0644
      tags: deploy_app

    - name: restart nginx
      service:
        name: nginx
        state: restarted
```
:bulb: Notice the following aspects:

* The target `hosts`: it can be a group.
* The `become` key instructs Ansible to run as the *superuser* (normally
  `root`) on the managed hosts.
* Each task uses a different *builtin module*. Can you name them?
* Builtin modules have self-explanatory names:
  * some of them are system-agnostic, e.g., the `copy` one. :question: Which others?
  * others depend on the target OS. :question: Which ones?
* We can use `tags` to select specific tasks. E.g., with:

``` shell
lcl$ ansible-playbook playbooks/web.yml --tags='config'
```

Check the validity of your playbook:

``` shell
lcl$ ansible-playbook --syntax-check playbooks/web.yml
```

Check ("dry run") what would be done (something should *fail*):

``` shell
lcl$ ansible-playbook --check playbooks/web.yml

PLAY [Configure webserver with nginx] *****************

TASK [Gathering Facts] ********************************
ok: [testserver]

TASK [install nginx] **********************************
changed: [testserver]

TASK [copy nginx config file] *************************
changed: [testserver]

TASK [enable configuration] ***************************
fatal: [testserver]: FAILED! => {"changed": false, "msg": "src file does not exist ..."}

PLAY RECAP ********************************************
testserver: ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
:bulb: Notice the first task executed by default: "Gathering Facts". That's
where Ansible queries the target infrastructure to collect useful information
in order to establish the infrastructure's *state*.

Indeed, the task "enable configuration" is expected to fail because the
presence of the remote source file depends on the previous task, which was
only simulated. That also causes the workflow to stop prematurely.
However, we can ignore errors in dry-run mode: just modify the failing
task as follows:
``` yaml
    - name: enable configuration
      ignore_errors: "{{ ansible_check_mode }}"
      ...
```
:bulb: Notice the special syntax `{{ ... }}`: that's the [Jinja2
templating](https://jinja.palletsprojects.com/en/stable/) way of substituting
variables. The internal variable `ansible_check_mode` provides the right
condition to the task's `ignore_errors` policy.
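Not something you need for this lab, but as a quick illustration of that
syntax, a throwaway task could print the variable with the `debug` module:

``` yaml
    - name: show whether we are running in check mode
      debug:
        msg: "Check mode is {{ ansible_check_mode }}"
```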
Now, run the `--check` again and you'll see that the workflow completes all 6
tasks, with the failing one marked as *ignored*.
Then, have a look at nginx's configuration file `playbooks/files/nginx.conf`
referenced in the playbook. It contains the declaration of the Web app's homepage:
``` nginx
server {
    ...
    index index.html index.htm;
    ...
}
```
The file `index.html` will be rendered by Ansible's templating engine from the
Jinja2 template `playbooks/templates/index.html.j2`, which uses some internal
Ansible variables:
``` html
...
<p>Some facts Ansible gathered about this machine:
<table>
<tr><td>OS family:</td><td>{{ ansible_os_family }}</td></tr>
<tr><td>Distribution:</td><td>{{ ansible_distribution }}</td></tr>
<tr><td>Distribution version:</td><td>{{ ansible_distribution_version }}</td></tr>
</table>
</p>
...
```
Those variables above will be instantiated during the "Gathering Facts" stage.
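If you are curious about which facts are available for templating, you can
dump them (or a filtered subset) with the `setup` module; for example:

``` shell
lcl$ ansible testserver -m setup -a 'filter=ansible_distribution*'
```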
Now, run the playbook for real (no `--check`):

``` shell
lcl$ ansible-playbook playbooks/web.yml
```
If everything goes well, the output should confirm a task recap of
`ok=6 changed=5`.
Point your Web browser to the address of the managed server (mind that we are
not using SSL): `http://<VM-DNS-name-or-IP-address>`. You should see the
Web app's home page rendered from the template.
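If you prefer the command line, a quick check with `curl` (assuming it is
installed on your local machine) should return the rendered HTML:

``` shell
lcl$ curl http://<VM-DNS-name-or-IP-address>/
```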
According to this principle, before doing anything, Ansible should establish
the current state of the managed server, compare it to the desired state
expressed in the playbook, and then perform only the actions necessary to
bring the current state to the desired state. In other words, if the managed
system is already in its desired state, nothing will be done (apart from some
notable exception -- see below): that's called "idempotence".
In its output, Ansible marks tasks where it had to perform some action as
*changed*, whereas tasks where the actual state already corresponded to the
desired state are marked as *ok*.
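For instance, if you immediately re-run the playbook without changing
anything, tasks whose target state is already satisfied should now come back
as *ok* instead of *changed* -- a sketch of what you might see (exact counts
are for you to observe):

``` shell
lcl$ ansible-playbook playbooks/web.yml
...
TASK [copy nginx config file] *************************
ok: [testserver]
...
```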
1. :question: What does Ansible do to the file and what does it show in
   its output?
1. Do something more drastic like completely removing the homepage
   `index.html` and repeat the previous question.
1. :question: What happened this time?
1. Notwithstanding the idempotence principle, there's a task which is always
   marked as "changed".
1. :question: Which one? Do you have an explanation?
:hammer_and_wrench: **Write down your answers, please.** We will discuss them
later on.
### Task #6: Adding a handler for nginx restart ###
**Goal:** improve the playbook by restarting nginx only when needed.
The current version of the playbook _unconditionally_ restarts nginx every
time the playbook is run, irrespective of the managed server's state. This
indeed goes a bit too far.
By putting the nginx restart command into a *handler* instead of a task, its
execution can be made _conditional_. The rationale is that nginx is restarted
only if one of the tasks that affect nginx's configuration resulted in a
change.
Consult the [Ansible documentation about
handlers](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers.html#handlers). The mechanism is quite simple:

- A task declares a `notify` attribute referencing either a handler name or
  defining a signal name. The notification will be delivered only if the task
  reports a change.
- The referenced handler, or any handler that *listens* to the signal, will be
  triggered by the notification.

Here's an example:

``` yaml
tasks:
  - name: foo
    ...
    # target handler
    notify: handler_1

  - name: bar
    ...
    # signal
    notify: a_signal

handlers:
  - name: handler_1
    # triggered by task 'foo'
    ...

  - name: baz
    # triggered by signal 'a_signal'
    listen: a_signal
    ...
```
:hammer_and_wrench: **Over to you now.** Modify the playbook so that the nginx restart task
becomes a handler and the tasks that potentially modify its configuration use
*notify* to trigger the handler.
### Task #7: Add more managed servers ###
**Goal:** add more managed servers that will be configured by the same
playbook.
1. Create another Cloud instance using the same parameters as before. :bulb:
   There's an easy way to do that with Terraform's `count` mechanism ;-)
2. Extend the `webservers` group in your inventory file to include this new
   managed host.
3. Re-run your web playbook. :question: What do you observe in Ansible's
   output?
1. :question: If the fixes are to be permanently applied to a *subset* of
   the managed servers, what do you need to do to bring only those servers
   to the new *fixed* state and the rest back to the *initial* state?
:hammer_and_wrench: **Write down your answers, please.** We will discuss them
later on.
### Task #8: Doing things on demand ###
**Goal:** practice with the 'never' tag to trigger special actions on demand.
#### Task #8.1: Reverting tasks ####
Sometimes things must be done on demand, outside of the normal automated
workflow. Especially when developing a playbook, it would be handy to revert
some tasks and restart the workflow from scratch, **without** reinstalling the
managed host.
The simplest way to implement a "revert all" function is based on a `shell`
command task combined with the [special tag
`never`](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html#always-and-never),
which makes Ansible ignore it unless explicitly requested:
``` yaml
tasks:
  - name: foo
    ...

  - name: bar
    ...

  - name: revert all
    shell:
      # the pipe instructs Ansible to read the following lines as a single
      # script
      cmd: |
        # ensure failure at the first error
        set -o errexit
        # undo task foo
        ...
        # undo task bar
        ...
    tags:
      - never
      - revert_all
```
Then, you'd call it like this:

``` shell
lcl$ ansible-playbook [--check] YOUR_PLAYBOOK --tags='revert_all'
```
:bulb: Notice how, in normal conditions, never-tagged tasks are not shown in
Ansible's report.
:hammer_and_wrench: **Over to you now.** Extend your playbook with a "revert all" task that
removes all nginx configuration files/symlinks and uninstalls nginx with an
explicit `apt` call (tip: use `--yes` to avoid the call getting stuck waiting
for input).
#### Task #8.2: No-op trigger tasks ####
As we have seen, we can impose dependencies among tasks by using handlers and
notifications. But if a notifying task doesn't change, the corresponding
handler will never fire: this is problematic if the handler is broken and
needs to be fixed without reverting all or reinstalling your managed host. Or
maybe you just want to rerun a handler to fix an unexpected situation like a
crashed daemon.
So, you may ask: why not stick a tag on the handler and run the playbook
with `--tags='my_handler'`? Unfortunately, [handlers ignore
tags!](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html#adding-tags-to-handlers)
However, we can force handlers with a dummy (*no-op*) trigger task. Here we abuse the
[`assert`
module](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/assert_module.html)
coupled with the [`changed_when`
policy](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_error_handling.html#defining-changed)
to ensure that the task always reports a change:
``` yaml
tasks:
  - name: force handler
    assert: { that: true, quiet: true }
    changed_when: true
    notify: my handler
    tags:
      - never
      - force_handler

handlers:
  - name: my handler
    ...
```
Then, you'd run it with:
``` shell
lcl$ ansible-playbook [--check] YOUR_PLAYBOOK --tags='force_handler'
```
:hammer_and_wrench: **Over to you now.** Extend your playbook with a "force
restart" dummy task that triggers the handler "restart nginx" on demand.
### Task #9: Cascading handlers ###
**Goal:** deploy a complex application with task/handler dependencies.
Suppose you have an out-of-tree (i.e., not available via the target host's
package manager) complex application composed of several modules which must
first be installed and then deployed in the correct order. The correct
approach is to separate installation tasks from deployment handlers, and we
must ensure redeployment of dependent modules whenever a dependency has
changed. A possible solution is using *cascading* handlers: indeed, a handler
can notify another one!
For the sake of simplicity, let's prepare a dummy 2-module application:
``` shell
lcl$ touch playbooks/files/app{1,2}
```
Then, we group the installation tasks into a `block`, just to use a convenient
Ansible feature that allows us to run all the tasks as a normal user and to
perform some rescue operations:
``` yaml
  tasks:
    - name: install complex app
      block:
        - name: install app1
          copy:
            src: files/app1
            dest: /tmp/app1
          notify: deploy app1

        - name: install app2
          copy:
            src: files/app2
            dest: /tmp/app2
          notify: deploy app2
      # Run as the default user
      become: false
      rescue:
        # This is where we could fix things, but just simulate.
        - name: Print when KO
          debug:
            msg: 'Something went wrong :-('

  handlers:
    - name: deploy app1
      command: echo "App 1 deployed"
      notify: deploy app2

    - name: deploy app2
      command: echo "App 2 deployed"
```
With the above code, `app1` and `app2` can be installed and deployed
independently. However, to redeploy `app2` whenever `app1` changes, the first
handler must notify the second one (hence the `notify` attribute in the
"deploy app1" handler above). Let's add the above snippet to our
`playbooks/web.yml` and try it.
:bulb: Notice the `debug` module used in the `rescue` section above. Also, to
see the echoed messages you have to use the `-v` (or `-vv`) switch.
At the first installation, we should get both installation tasks changed
**and** both handlers triggered, which makes 4 changes:
``` shell
lcl$ ansible-playbook -v playbooks/web.yml
...
TASK [install app1]
changed: [testserver]
TASK [install app2]
changed: [testserver]
RUNNING HANDLER [deploy app1]
changed: [testserver]
RUNNING HANDLER [deploy app2]
changed: [testserver]
...
PLAY RECAP
testserver : ... changed=4
```
Now, deploy a new version of `app1`. That should trigger one installation task
and both handlers (3 changes):
``` shell
lcl$ echo "foo v2" > playbooks/files/app1
lcl$ ansible-playbook -v playbooks/web.yml
...
TASK [install app1]
changed: [testserver]
TASK [install app2]
ok: [testserver]
RUNNING HANDLER [deploy app1]
changed: [testserver]
RUNNING HANDLER [deploy app2]
changed: [testserver]
...
PLAY RECAP
testserver : ... changed=3
```
### Task #10: Provision a KinD cluster and deploy a load-balanced app ###
**Goal:** Apply all the learned Ansible features to deploy a complex Web service.
:warning: **Please, complete the [K8s Lab
tutorial](https://gitedu.hesge.ch/lsds/teaching/bachelor/cloud-and-deployment/lab-k8s)
before starting this exercise.**
Now you have all the skills to provision a KinD cluster and deploy a
Kubernetes-based http-echo application served by a MetalLB load balancer
service.
:hammer_and_wrench: **Over to you now.** Prepare another playbook based on the
`playbooks/kind-metallb.yml` boilerplate.
:bulb: To go as quickly as possible, provision a 2-node KinD cluster with 2
http-echo pods on the only worker node.
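A minimal sketch of what such a cluster definition could look like (the file
name `kind-config.yaml` and the exact layout are up to you; see the KinD docs
for the authoritative format):

``` yaml
# kind-config.yaml -- one control-plane node and one worker node
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
```

You would then pass it to `kind create cluster --config kind-config.yaml` from
your playbook.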
Once done, use the following test track to finalize your workflow:
1. The first playbook run must install, provision and deploy everything. The
http-echo app shall respond with different messages.
1. Changing any configuration file must trigger a cascading handler series. E.g.:
- KinD configuration triggers all operations (apart from package installations).
- `metallb-native.yaml` triggers a whole app stack redeployment. :bulb:
`kubectl apply` can be called on a running stack.
- `metallb.yaml` triggers only the load balancer service redeployment.
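As noted in the list above, `kubectl apply` is safe to re-run against a live
cluster; conceptually, a redeployment handler boils down to something like
this (assuming the manifest has been downloaded and copied to the managed
host):

``` shell
# re-applying a manifest only changes the objects that actually differ
kubectl apply -f metallb-native.yaml
```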
For reference, the provided boilerplate `playbooks/kind-metallb.yml` contains
only guiding comments:

``` yaml
---
- name: Provision KinD and deploy MetalLB
  hosts: "{{ target_host | default('testserver') }}"
  become: True

  # General *bonuses*:
  # - catching and reacting to errors,
  # - using blocks,
  # - any other advanced features, like catching a command output with
  #   'register' for conditional tasks execution with
  #   'changed_when: <condition>'.

  tasks:
    # Grouping all installation tasks in a 'block' is a bonus :-)
    # - Installations shall be done as superuser.
    # - Executable files shall be installed in '/usr/local/bin/': their
    #   creation shall trigger a status change.

    # Install needed + aux packages with apt

    # Install KinD with a shell command

    # Install Kubectl with a shell command

    # Configure KinD as the default user in its home.

    # Configure the LoadBalancer service for the http-echo app as the default
    # user in its home.
    # - Since 3 files are involved (please, download locally the
    #   "metallb-native.yml"), using a block here is recommended!
    # - Configuration tasks shall trigger their respective deployments.

    # Optional *bonus* on-demand tasks:
    # - Rebuild all: manually reprovision the cluster and redeploy the
    #   application (tip: notify via a signal). This might require an extra
    #   task that checks the existence of a KinD cluster.
    # - Delete the KinD cluster: handy to clean up and restart the workflow.

  handlers:
    # All cluster manipulations and app deployments shall happen here.
    # - Use cascading handlers
    # - Separate handlers for KinD and the 3 application tasks
    # - Optional *bonus* handler to expose the LoadBalancer IP with socat
```