diff --git a/README.md b/README.md
index 28f77a59dd9a1d060b91ae2c19677fcf502d8aa0..23cb62c8047f7be11a5203bff4482147067f6565 100644
--- a/README.md
+++ b/README.md
@@ -31,7 +31,7 @@ lcl$ ansible --version
 
 ### Task #2: Create a VM on a Cloud of your choice ###
 
-**Goal:** create a VM that will be managed by Ansible.
+**Goal:** create a VM that will be managed by Ansible. :bulb: If you followed the [Terraform exercise](https://gitedu.hesge.ch/lsds/teaching/bachelor/cloud-and-deployment/lab-terraform/-/blob/main/SwitchEngines/README.md), use your TF plan to bring up your assigned sandbox instance and **skip** the rest of this task.
 
 Otherwise Chose any Cloud you are familiar with, then:
@@ -59,10 +59,24 @@ lcl$ ssh -i <your-private-key> <your-vm-user>@<VM-DNS-name-or-IP-address>
 
 **Goal:** instruct Ansible about the machines (hosts) it shall manage.
 
-Create a "sandbox" directory on your local machine `~/ansible/`. Inside
-it, create a file called `hosts.yml` which will serve as the *inventory* file
-(a.k.a. hostfile; it can also be written in `.ini` style), and add the
-following:
+Clone or, if you want to track your progress, fork and clone this Git repo.
+
+It's not advised to run Ansible inside your repo clone because sensitive
+information like SSH keys might get committed in Git. Thus, create a "sandbox"
+directory on your local machine:
+``` shell
+lcl$ mkdir -p ~/ansible/
+lcl$ cd ~/ansible/
+```
+Copy your repo clone's content over to your sandbox (you might need to install `rsync`):
+``` shell
+lcl$ rsync -Cahv YOUR_REPOS/lab-ansible/ .
+...
+```
+
+Open the file `hosts.yml`, which is your *inventory*
+(a.k.a. hostfile; it can also be written in `.ini` style). The contents look
+like:
 
 ``` yaml
 all:
@@ -73,32 +87,30 @@ all:
       ansible_ssh_private_key_file: <your-private-key>
 ```
 
-and check its validity:
+You see? There's only one host named "testserver".
+Complete the inventory with the missing data and check its validity:
 ``` shell
-lcl$ ansible-inventory -i ~/ansible/hosts.yml --list
+lcl$ ansible-inventory --list
 ```
 
 Verify that you can use Ansible to connect to the testserver:
 ``` shell
-lcl$ ansible -i ~/ansible/hosts.yml testserver -m ping
+lcl$ ansible testserver -m ping
 ```
 
-You should see output similar to the following:
-
+You should get a "pong" in response (ignore the rest):
 ```
 testserver | SUCCESS => {
-    "ansible_facts": {
-        "discovered_interpreter_python": "/usr/bin/python3"
-    },
-    "changed": false,
+    ...
     "ping": "pong"
 }
 ```
 
 Let's simplify the configuration of Ansible by using a user default
-configuration file `~/.ansible.cfg` with contents (`.ini` style):
+configuration file, either `~/.ansible.cfg` or `~/ansible/ansible.cfg`, with
+contents (`.ini` style):
 
 ``` ini
 [defaults]
@@ -110,12 +122,12 @@ deprecation_warnings = false
 ```
 
 Notice that SSH's host key checking is disabled for convenience, as the
-managed VM will get a new IP address each time it is recreated. **For
+managed VM will probably get a new IP address each time it is recreated. **For
 production systems this is a security risk!**
 
 [See the doc](https://docs.ansible.com/ansible/latest/reference_appendices/config.html#the-configuration-file)
 for the other details.
 
-With the above default settings our inventory file now simplifies to:
+Now we can simplify our inventory file:
 
 ``` yaml
 all:
@@ -124,35 +136,30 @@ all:
      ansible_ssh_host: <VM-DNS-name-or-IP-address>
 ```
 
-Now calling ansible is simpler:
+The ansible `-m command` option can be used to run arbitrary commands on the managed
+hosts. F.i., to print the SSH user's name:
 ``` shell
-lcl$ ansible testserver -m ping
+lcl$ ansible testserver -m command -a whoami
+...
+debian
 ```
 
-The ansible command can be used to run arbitrary commands on the remote
-machines. F.i., to execute the uptime command:
-``` shell
-lcl$ ansible testserver -m command -a uptime
-testserver | CHANGED | rc=0 >>
- 09:05:10 up 54 min,  2 users,  load average: 0.01, 0.00, 0.00
-```
-
-
-### Task #4: Install web application ###
+### Task #4: Install a web application ###
 
 **Goal:** configure the managed host to run a simple Web application served by
-the nginx server. This necessitates four files:
+the nginx server.
+
+We need four files:
 
- 1. The inventory file `~/ansible/hosts.yml` as written before.
- 2. A "playbook" that specifies what to configure
-    `~/ansible/playbooks/web.yml`.
- 3. The nginx's configuration file `~/ansible/playbooks/files/nginx.conf`.
- 4. A home page template for our Web site
-    `~/ansible/playbooks/templates/index.html.j2`.
+ 1. The inventory file `hosts.yml` as written before.
+ 2. A "playbook" that specifies what to configure: `playbooks/web.yml`.
+ 3. The nginx's configuration file `playbooks/files/nginx.conf`.
+ 4. A home page (Jinja template) for our Web site
+    `playbooks/templates/index.html.j2`.
 
-To make our playbook more generic, we will use an ansible group called
+To make our playbook more generic, we will use a host group called
 `webservers` holding our managed server, so that we can then later easily add
 more servers which Ansible will configure identically. Modify the hostfile by
 adding a definition of the group webservers, this is **left to you as an
@@ -191,7 +198,7 @@ lcl$ ansible webservers -m ping
 
 The output should be the same as before.
 
-Now, create a playbook named `~/ansible/playbooks/web.yml` with the following content:
+Now, open the playbook file `playbooks/web.yml`. It should look like:
 
 ``` yaml
 ---
@@ -200,104 +207,135 @@ Now, create a playbook named `~/ansible/playbooks/web.yml` with the following co
   become: True
   tasks:
     - name: install nginx
-      apt: name=nginx update_cache=yes
+      apt:
+        update_cache: yes
+        pkg: nginx
+      tags: install
 
     - name: copy nginx config file
-      copy: src=files/nginx.conf dest=/etc/nginx/sites-available/default
+      copy:
+        src: files/nginx.conf
+        dest: /etc/nginx/sites-available/default
+      tags: config
 
     - name: enable configuration
-      file: >
-        dest=/etc/nginx/sites-enabled/default
-        src=/etc/nginx/sites-available/default
-        state=link
+      file:
+        src: /etc/nginx/sites-available/default
+        dest: /etc/nginx/sites-enabled/default
+        state: link
+      tags: config
 
     - name: copy index.html
-      template: src=templates/index.html.j2 dest=/usr/share/nginx/html/index.html mode=0644
+      template:
+        src: templates/index.html.j2
+        dest: /usr/share/nginx/html/index.html
+        mode: 0644
+      tags: deploy_app
 
     - name: restart nginx
-      service: name=nginx state=restarted
+      service:
+        name: nginx
+        state: restarted
 ```
 
-:question: How many tasks are declared in the above playbook?
-
-Then, create the nginx's configuration file
-`~/ansible/playbooks/files/nginx.conf` referenced in the playbook references,
-which the following content:
-
-``` nginx
-server {
-    listen 80 default_server;
-    listen [::]:80 default_server ipv6only=on;
-
-    root /usr/share/nginx/html;
-    index index.html index.htm;
+:bulb: Notice the following aspects:
+ * The target `hosts`: it can be a group.
+ * The `become` key instructs Ansible to run as the *superuser* (normally
+   `root`) on the managed hosts.
+ * Each task uses a different *builtin module*. :question: Can you name them?
+ * Builtin modules have self-explanatory names:
+   * some of them are system-agnostic, e.g., the `copy` one. :question: Which other?
+   * others depend on the target OS. :question: Which ones?
+ * We can use `tags` to select specific tasks. E.g., with:
+
+   ``` shell
+   lcl$ ansible-playbook playbooks/web.yml --tags='config'
+   ```
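+
+:bulb: Aside: recent Ansible style also lets you refer to builtin modules by
+their fully-qualified collection name (FQCN). A minimal sketch of the first
+task rewritten that way (the short `apt` name used in this lab keeps working;
+both names resolve to the same module):
+
+``` yaml
+    - name: install nginx
+      ansible.builtin.apt:
+        update_cache: yes
+        pkg: nginx
+      tags: install
+```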
+
+Check the validity of your playbook:
+``` shell
+lcl$ ansible-playbook --syntax-check playbooks/web.yml
+```
-
-    server_name localhost;
-
-    location / {
-        try_files $uri $uri/ =404;
-    }
-}
-```
+
+Check ("dry run") what would be done (something should *fail*):
+``` shell
+lcl$ ansible-playbook --check playbooks/web.yml
+
+PLAY [Configure webserver with nginx] *****************
-
-As per the above configuration file, nginx will serve the Web app's homepage
-from `index.html`. This will be generated via Ansible's templating engine from
-a template, which has to be created as
-`~/ansible/playbooks/templates/index.html.j2` and shall hold the following:
+
+TASK [Gathering Facts] ********************************
+ok: [testserver]
-
-``` html
-<html>
-  <head>
-    <title>Welcome to Ansible</title> </head>
-  <body>
-    <h1>nginx, configured by Ansible</h1>
-    <h2>instance: {{ ansible_hostname }}</h2>
-    <p>If you can see this, Ansible successfully installed nginx.</p>
-    <p>{{ ansible_managed }}</p>
-    <p>Some facts Ansible gathered about this machine:
-      <table>
-        <tr><td>OS family:</td><td>{{ ansible_os_family }}</td></tr>
-        <tr><td>Distribution:</td><td>{{ ansible_distribution }}</td></tr>
-        <tr><td>Distribution version:</td><td>{{ ansible_distribution_version }}</td></tr>
-      </table>
-    </p>
-  </body>
-</html>
+
+TASK [install nginx] **********************************
+changed: [testserver]
-```
+
+TASK [copy nginx config file] *************************
+changed: [testserver]
-
-Now, run the newly created playbook to install and configure nginx, and to
-deploy the Web app on the managed host:
+
+TASK [enable configuration] ***************************
+fatal: [testserver]: FAILED! => {"changed": false, "msg": "src file does not exist ..."}
-
-``` shell
-lcl$ ansible-playbook ~/ansible/playbooks/web.yml
+
+PLAY RECAP ********************************************
+testserver: ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
 ```
-
-If everything goes well, the last output lines should be like:
-
-    PLAY [Configure webserver with nginx] **************************************
+
+:bulb: Notice the first task executed by default: "Gathering Facts". That's
+where Ansible queries the target infrastructure to collect useful information
+in order to establish the infrastructure's *state*.
-
-    TASK [Gathering Facts] *****************************************************
-    ok: [testserver]
+
+Indeed, the task "enable configuration" is expected to fail because the
+presence of the remote source file depends on the previous task, which was
+only simulated. That also causes the workflow to stop prematurely.
-
-    TASK [install nginx] *******************************************************
-    changed: [testserver]
+
+However, we can ignore errors in dry mode: just modify the failing
+task with:
+``` yaml
+- name: enable configuration
+  ignore_errors: "{{ ansible_check_mode }}"
+  ...
+```
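+
+Put together (a sketch; keep the other attributes of your existing task, e.g.
+its tags, as they are), the modified task would look like:
+
+``` yaml
+- name: enable configuration
+  file:
+    src: /etc/nginx/sites-available/default
+    dest: /etc/nginx/sites-enabled/default
+    state: link
+  ignore_errors: "{{ ansible_check_mode }}"
+  tags: config
+```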
-
-    TASK [copy nginx config file] ***********************************************
-    changed: [testserver]
+
+:bulb: Notice the special syntax `{{ ... }}`: that's the [Jinja2
+templating](https://jinja.palletsprojects.com/en/stable/) way of substituting
+variables. The internal variable `ansible_check_mode` provides the right
+condition to the task's `ignore_errors` policy.
-
-    TASK [enable configuration] *************************************************
-    ok: [testserver]
+
+Now, try the `--check` run again and you'll see that the workflow completes
+all 6 tasks, with the failing one marked as *ignored*.
-
-    TASK [copy index.html] ******************************************************
-    changed: [testserver]
+
+Then, have a look at nginx's configuration file `playbooks/files/nginx.conf`
+referenced in the playbook. It contains the declaration of the Web app's homepage:
+``` nginx
+server {
+    ...
+    index index.html index.htm;
+    ...
+}
+```
-
-    TASK [restart nginx] ********************************************************
-    changed: [testserver]
+
+The file `index.html` will be rendered by Ansible's templating engine from the
+Jinja2 template `playbooks/templates/index.html.j2`, which uses some internal
+Ansible variables:
+``` html
+...
+<p>Some facts Ansible gathered about this machine:
+  <table>
+    <tr><td>OS family:</td><td>{{ ansible_os_family }}</td></tr>
+    <tr><td>Distribution:</td><td>{{ ansible_distribution }}</td></tr>
+    <tr><td>Distribution version:</td><td>{{ ansible_distribution_version }}</td></tr>
+  </table>
+</p>
+...
+```
+Those variables will be instantiated with the facts collected during the
+"Gathering Facts" stage.
-
-    PLAY RECAP ******************************************************************
-    testserver : ok=6 changed=4 unreachable=0 failed=0
+
+Now, run the newly created playbook:
+``` shell
+lcl$ ansible-playbook playbooks/web.yml
+```
+If everything goes well, the output should confirm a task recap of: ok=6,
+changed=5.
 
 Point your Web browser to the address of the managed server (mind that we are
 not using SSL): `http://<VM-DNS-name-or-IP-address>`. You should see the
@@ -312,7 +350,9 @@ State Configuration.
 
 According to this principle, before doing anything, Ansible should establish
 the current state of the managed server, compare it to the desired state
 expressed in the playbook, and then only perform the actions necessary to
-bring the current state to the desired state. In other words, if the managed system is already in its desired state, nothing will be done (apart from some notable exception -- see below): that's called "idempotence".
+bring the current state to the desired state. In other words, if the managed
+system is already in its desired state, nothing will be done (apart from some
+notable exception -- see below): that's called "idempotence".
 
 In its ouput, Ansible marks tasks where it had to perform some action as
 *changed* whereas tasks where the actual state already corresponded to the
@@ -334,20 +374,24 @@ desired state as *ok*.
    1. :question: What does Ansible do to the file and what does it show in its output?
 1. Do something more drastic like completely removing the homepage
-   `index.html` (by the way, what's the deployment path?) and repeat the
-   previous question.
+   `index.html` and repeat the previous question.
    1. :question: What happened this time?
- 1. Nothwitstanding the idempotence principle, there's task which is always marked as "changed".
+ 1. Notwithstanding the idempotence principle, there's a task which is always
+    marked as "changed".
    1. :question: Which one? Do you have an explanation?
+
+:hammer_and_wrench: **Write down your answers, please.** We will discuss them
+later on.
+
 
 ### Task #6: Adding a handler for nginx restart ###
 
 **Goal:** improve the playbook by restarting nginx only when needed.
-The current version of the playbook _uncondizonally_ restarts nginx every time the playbook is
-run, irrespective of the managed server's state. This goes indeed a bit too
-far.
+The current version of the playbook _unconditionally_ restarts nginx every
+time the playbook is run, irrespective of the managed server's state. This
+indeed goes a bit too far.
 
 By putting the nginx restart command into a *handler*, instead of a task, its
 execution can be made _conditional_. The rationale is that nginx is restarted
@@ -355,18 +399,52 @@ only if one of the tasks that affects nginx's configuration resulted in a
 change.
 
 Consult the [Ansible documentation about
-handlers](https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html). Modify
-the playbook so that the nginx restart becomes a handler and the tasks that
-potentially modify its configuration use *notify* to call the handler when
-needed.
+handlers](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers.html#handlers). The mechanism is quite simple:
+- A task declares a `notify` attribute that references either a handler name
+  or a signal name. The notification will be delivered only if the task
+  reports a change.
+- The referenced handler, or any handler that *listens* to the signal, will be
+  triggered by the notification.
 
-### Task 7: Add more managed servers ###
+Here's an example:
+
+``` yaml
+  tasks:
+    - name: foo
+      ...
+      # target handler
+      notify: handler_1
+
+    - name: bar
+      ...
+      # signal
+      notify: a_signal
+
+  handlers:
+    - name: handler_1
+      ...
+      # triggered by task 'foo'
+
+    - name: baz
+      ...
+      # triggered by signal 'a_signal'
+      listen: a_signal
+```
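+
+:bulb: Notified handlers run once, at the end of the play, no matter how many
+tasks notified them. Should you ever need to run pending handlers earlier, the
+builtin `meta` module can flush them (just an aside, not needed in this lab):
+
+``` yaml
+  tasks:
+    - name: run any pending handlers right now
+      meta: flush_handlers
+```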
+
+
+:hammer_and_wrench: **Over to you now.** Modify the playbook so that the nginx restart task
+becomes a handler and the tasks that potentially modify its configuration use
+*notify* to trigger the handler.
+
+
+### Task #7: Add more managed servers ###
 
 **Goal:** add more managed servers that will be configured by the same
 playbook.
 
- 1. Create another Cloud instance using the same parameters as before. :bulb: There's an easy way to do that with Terraform's `count` mechanism ;-)
+ 1. Create another Cloud instance using the same parameters as before. :bulb:
+    There's an easy way to do that with Terraform's `count` mechanism ;-)
 2. Extend the `webservers` group in your inventory file to include this new
    managed host.
 3. Re-run your web playbook. :question: What do you observe in Ansible's
@@ -386,3 +464,239 @@ playbook.
 1. :question: If the fixes are to be permanently applied to a *subset* of the
    managed servers, what do you need to do to bring only those servers to the
    new *fixed* state and the rest back to the *initial* state?
+
+:hammer_and_wrench: **Write down your answers, please.** We will discuss them
+later on.
+
+
+### Task #8: Doing things on demand ###
+
+**Goal:** practice with the `never` tag to trigger special actions on demand.
+
+
+#### Task #8.1: Reverting tasks ####
+
+Sometimes things must be done on demand, outside of the normal automated
+workflow. Especially when developing a playbook, it would be handy to revert
+some tasks and restart the workflow again from scratch, **without**
+reinstalling the managed host.
+
+The simplest way to implement a "revert all" function is based on a `shell`
+command task combined with the [special tag
+`never`](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html#always-and-never),
+which makes Ansible ignore it unless explicitly requested:
+
+``` yaml
+  tasks:
+    - name: foo
+      ...
+
+    - name: bar
+      ...
+
+    - name: revert all
+      shell:
+        # the pipe instructs Ansible to read the following lines as a single
+        # script
+        cmd: |
+          # ensure failure at the first error
+          set -o errexit
+          # undo task foo
+          ...
+          # undo task bar
+          ...
+      tags:
+        - never
+        - revert_all
+```
+
+Then, you'd call it like this:
+``` shell
+lcl$ ansible-playbook [--check] YOUR_PLAYBOOK --tags='revert_all'
+```
+
+:bulb: Notice how, in normal conditions, never-tagged tasks are not shown in
+Ansible's report.
+
+:hammer_and_wrench: **Over to you now.** Extend your playbook with a "revert all" task that
+removes all nginx configuration files/symlinks and uninstalls nginx with an
+explicit `apt` call (tip: use `--yes` to avoid the call getting stuck for
+input).
+
+
+#### Task #8.2: No-op trigger tasks ####
+
+As we have seen, we can impose dependencies among tasks by using handlers and
+notifications. But if a notifying task doesn't change, the corresponding
+handler will never fire: this is problematic if the handler is broken and
+needs to be fixed without reverting all or reinstalling your managed host. Or
+maybe you just want to rerun a handler to fix an unexpected situation like a
+crashed daemon.
+
+So, you may ask: why not stick a tag on the handler and run the playbook
+with `--tags='my_handler'`? Unfortunately, [handlers ignore
+tags!](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html#adding-tags-to-handlers)
+
+However, we can force handlers with a dummy (*no-op*) trigger task. Here we abuse the
+[`assert`
+module](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/assert_module.html)
+coupled with the [`changed_when`
+policy](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_error_handling.html#defining-changed)
+to ensure that the task always reports a change:
+``` yaml
+  tasks:
+    - name: force handler
+      assert: { that: true, quiet: true }
+      changed_when: true
+      notify: my handler
+      tags:
+        - never
+        - force_handler
+
+  handlers:
+    - name: my handler
+      ...
+```
+
+Then, you'd run it with:
+``` shell
+lcl$ ansible-playbook [--check] YOUR_PLAYBOOK --tags='force_handler'
+```
+
+:hammer_and_wrench: **Over to you now.** Extend your playbook with a "force
+restart" dummy task that triggers the handler "restart nginx" on demand.
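+
+:bulb: The `changed_when: true` trick is a special case of a more general
+pattern: `register` a command's result and decide yourself what counts as a
+change. A sketch (assuming your "restart nginx" handler from Task #6) that
+notifies the handler only when nginx is found not running:
+
+``` yaml
+    - name: detect a stopped nginx
+      command: systemctl is-active nginx
+      register: nginx_state
+      failed_when: false                            # a stopped service is not a play failure
+      changed_when: nginx_state.stdout != 'active'  # report a change only when nginx is down
+      notify: restart nginx
+```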
+
+
+### Task #9: Cascading handlers ###
+
+**Goal:** deploy a complex application with task/handler dependencies.
+
+Suppose you have an out-of-tree (i.e., not available via the target host
+package manager) complex application composed of several modules which must
+first be installed and then deployed in the correct order. The correct
+approach is to separate installation tasks from deployment handlers, and we
+must ensure redeployment of dependent modules whenever a dependency has
+changed. A possible solution is using *cascading* handlers: indeed, a handler
+can notify another one!
+
+For the sake of simplicity, let's prepare a dummy 2-module application:
+
+``` shell
+lcl$ touch playbooks/files/app{1,2}
+```
+
+Then, we group the installation tasks into a `block`, just to use a convenient
+Ansible feature that allows us to run all tasks as a normal user and perform
+some rescue operations:
+``` yaml
+  tasks:
+    - name: install complex app
+      block:
+        - name: install app1
+          copy:
+            src: files/app1
+            dest: /tmp/app1
+          notify: deploy app1
+
+        - name: install app2
+          copy:
+            src: files/app2
+            dest: /tmp/app2
+          notify: deploy app2
+
+      # Run as the default user
+      become: false
+      rescue:
+        # This is where we could fix things, but just simulate.
+        - name: Print when KO
+          debug:
+            msg: 'Something went wrong :-('
+
+  handlers:
+    - name: deploy app1
+      command: echo "App 1 deployed"
+      notify: deploy app2
+
+    - name: deploy app2
+      command: echo "App 2 deployed"
+```
+
+With the above code, `app1` and `app2` can be installed and deployed
+independently. However, to redeploy `app2` whenever `app1` changes, the first
+handler must notify the second one (hence the `notify` in the "deploy app1"
+handler above). Let's add the above snippet to our `playbooks/web.yml` and
+try it.
+
+:bulb: Notice the `debug` feature above. Also, to see the echo messages you
+have to use the `-v` switch.
+
+At the first installation, we should get both installation tasks changed
+**and** both handlers triggered, which makes 4 changes:
+``` shell
+lcl$ ansible-playbook -v playbooks/web.yml
+...
+TASK [install app1]
+changed: [testserver]
+
+TASK [install app2]
+changed: [testserver]
+
+RUNNING HANDLER [deploy app1]
+changed: [testserver]
+
+RUNNING HANDLER [deploy app2]
+changed: [testserver]
+...
+
+PLAY RECAP
+testserver : ... changed=4
+```
+
+Now, deploy a new version of `app1`. That should trigger one installation task
+and both handlers (3 changes):
+``` shell
+lcl$ echo "foo v2" > playbooks/files/app1
+lcl$ ansible-playbook -v playbooks/web.yml
+...
+TASK [install app1]
+changed: [testserver]
+
+TASK [install app2]
+ok: [testserver]
+
+RUNNING HANDLER [deploy app1]
+changed: [testserver]
+
+RUNNING HANDLER [deploy app2]
+changed: [testserver]
+...
+
+PLAY RECAP
+testserver : ... changed=3
+```
+
+### Task #10: provision a KinD cluster and deploy a load-balanced app ###
+
+**Goal:** apply all the learned Ansible features to deploy a complex Web service.
+
+:warning: **Please, complete the [K8s Lab
+tutorial](https://gitedu.hesge.ch/lsds/teaching/bachelor/cloud-and-deployment/lab-k8s)
+before starting this exercise.**
+
+Now you have all the skills to provision a KinD cluster and deploy a
+Kubernetes-based http-echo application served by a MetalLB load balancer
+service.
+
+:hammer_and_wrench: **Over to you now.** Prepare another playbook based on the
+`playbooks/kind-metallb.yml` boilerplate.
+
+:bulb: To go as quickly as possible, provision a 2-node KinD cluster with 2
+http-echo pods on the only worker node.
+
+Once done, use the following test checklist to finalize your workflow:
+ 1. The first playbook run must install, provision and deploy everything. The
+    http-echo app shall respond with different messages.
+ 1. Changing any configuration file must trigger a cascading handler series. E.g.:
+    - KinD configuration triggers all operations (apart from package installations).
+    - `metallb-native.yaml` triggers a whole app stack redeployment. :bulb:
+      `kubectl apply` can be called on a running stack.
+    - `metallb.yaml` triggers only the load balancer service redeployment.
diff --git a/ansible/playbooks/kind-metallb.yml b/ansible/playbooks/kind-metallb.yml
new file mode 100644
index 0000000000000000000000000000000000000000..951e35e8733897d393bd00908ed1300b4497e0a8
--- /dev/null
+++ b/ansible/playbooks/kind-metallb.yml
@@ -0,0 +1,47 @@
+---
+- name: Provision KinD and deploy MetalLB
+  hosts: "{{ target_host | default('testserver') }}"
+  become: True
+
+  # General *bonuses*:
+  # - catching and reacting to errors,
+  # - using blocks,
+  # - any other advanced features, like catching a command output with
+  #   'register' for conditional tasks execution with
+  #   'changed_when: <condition>'.
+
+  tasks:
+    # Grouping all installation tasks in a 'block' is a bonus :-)
+    # - Installations shall be done as superuser.
+    # - Executable files shall be installed in '/usr/local/bin/': their
+    #   creation shall trigger a status change.
+
+    # Install needed + aux packages with apt
+
+    # Install KinD with a shell command
+
+    # Install Kubectl with a shell command
+
+
+    # Configure KinD as the default user in its home.
+
+    # Configure the LoadBalancer service for the http-echo app as the default
+    # user in its home.
+    # - Since 3 files are involved (please, download locally the
+    #   "metallb-native.yaml"), using a block here is recommended!
+    # - Configuration tasks shall trigger their respective deployments.
+
+
+    # Optional *bonus* on-demand tasks:
+
+    # - Rebuild all: manually reprovision the cluster and redeploy the
+    #   application (tip: notify via a signal). This might require an extra
+    #   task that checks the existence of a KinD cluster.
+
+    # - Delete the KinD cluster: handy to clean up and restart the workflow.
+
+  handlers:
+    # All cluster manipulations and app deployments shall happen here.
+    # - Use cascading handlers
+    # - Separate handlers for KinD and the 3 application tasks
+    # - Optional *bonus* handler to expose the LoadBalancer IP with socat
diff --git a/ansible/playbooks/web-advnc.yml b/ansible/playbooks/web-advnc.yml
deleted file mode 100644
index e52252de3326c24e9494c2146aba99ed52cbe7a8..0000000000000000000000000000000000000000
Binary files a/ansible/playbooks/web-advnc.yml and /dev/null differ
diff --git a/ansible/playbooks/web-basic.yml b/ansible/playbooks/web-basic.yml
deleted file mode 100644
index ba39a493b9904f060c94dc8443860832042e1d4d..0000000000000000000000000000000000000000
--- a/ansible/playbooks/web-basic.yml
+++ /dev/null
@@ -1,22 +0,0 @@
----
-- name: Configure webserver with nginx
-  hosts: webservers
-  become: True
-  tasks:
-    - name: install nginx
-      apt: name=nginx update_cache=yes
-
-    - name: copy nginx config file
-      copy: src=files/nginx.conf dest=/etc/nginx/sites-available/default
-
-    - name: enable configuration
-      file: >
-        dest=/etc/nginx/sites-enabled/default
-        src=/etc/nginx/sites-available/default
-        state=link
-
-    - name: copy index.html
-      template: src=templates/index.html.j2 dest=/usr/share/nginx/html/index.html mode=0644
-
-    - name: restart nginx
-      service: name=nginx state=restarted
diff --git a/ansible/playbooks/web.yml b/ansible/playbooks/web.yml
deleted file mode 120000
index 6afc5c94dad3e3cafbf597e6c67a186d4efcd48e..0000000000000000000000000000000000000000
--- a/ansible/playbooks/web.yml
+++ /dev/null
@@ -1 +0,0 @@
-web-basic.yml
\ No newline at end of file
diff --git a/ansible/playbooks/web.yml b/ansible/playbooks/web.yml
new file mode 100644
index 0000000000000000000000000000000000000000..113bfc694a41b827e52c170bbc2d0187e748b6a5
--- /dev/null
+++ b/ansible/playbooks/web.yml
@@ -0,0 +1,35 @@
+---
+- name: Configure webserver with nginx
+  hosts: webservers
+  become: True
+  tasks:
+    - name: install nginx
+      apt:
+        update_cache: yes
+        pkg: nginx
+      tags: install
+
+    - name: copy nginx config file
+      copy:
+        src: files/nginx.conf
+        dest: /etc/nginx/sites-available/default
+      tags: config
+
+    - name: enable configuration
+      file:
+        src: /etc/nginx/sites-available/default
+        dest: /etc/nginx/sites-enabled/default
+        state: link
+      tags: config
+
+    - name: copy index.html
+      template:
+        src: templates/index.html.j2
+        dest: /usr/share/nginx/html/index.html
+        mode: 0644
+      tags: deploy_app
+
+    - name: restart nginx
+      service:
+        name: nginx
+        state: restarted