## Lab: Cloud provisioning/orchestration - Terraform and AWS

Lab template for a Cloud provisioning/orchestration exercise with Terraform
(TF) and AWS, with an adaptation to OpenStack/SwitchEngines in progress.
## Pedagogical objectives ##
* Become familiar with a Cloud provisioning/orchestration tool
* Provision Cloud resources in an automated fashion
## Tasks ##
In this lab you will perform a number of tasks and document your progress in a
lab report. Each task specifies one or more deliverables to be
produced. Collect all the deliverables in your lab report.
**N.B.** Some tasks require interacting with your local machine's OS: any
related commands are supposed to be run in a terminal with the following
conventions about the *command line prompt*:
* `#`: execution with super user's (root) privileges
* `$`: execution with normal user's privileges
* `lcl`: your local machine
* `ins`: your VM instance
The TF CLI's output follows a diff-style convention of prefixing lines with
special marks to explain what is (or would be) going on:
* `+`: something is added
* `-`: something is removed/destroyed
* `-/+`: a resource is destroyed and recreated
* `~`: something is modified in place
### Task #1: install Terraform CLI and OpenStack CLI ###
**Goal:** install the Terraform CLI and OpenStack CLI on your local machine.
Please refer to your OS documentation for the proper way to do so:
1. [Terraform
CLI](https://learn.hashicorp.com/tutorials/terraform/install-cli)
v1.1.4. Skip the TF "Quick start tutorial" (Docker).
1. [OpenStack CLI](https://docs.openstack.org/newton/user-guide/common/cli-install-openstack-command-line-clients.html) v5.7.0.
1. Create a new [OpenStack Access
Credential](https://engines.switch.ch/horizon/identity/application_credentials/)
and save it as `~/.config/openstack/clouds.yaml` (see the sketch after this
list). With SwitchEngines, the cloud name to use in this lab should be
`engines`.
1. Verify that your credentials are OK:
``` shell
lcl$ openstack --os-cloud=engines [application] credential list
```
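:bulb: For reference, a `clouds.yaml` using application-credential
authentication has roughly the following shape. This is a minimal sketch: the
endpoint, region and credential values are placeholders, to be taken from the
file that Horizon lets you download when you create the credential.

``` yaml
# ~/.config/openstack/clouds.yaml -- sketch only, all values are placeholders
clouds:
  engines:
    auth_type: "v3applicationcredential"
    auth:
      auth_url: "<your-keystone-endpoint-URL>"
      application_credential_id: "<your-credential-ID>"
      application_credential_secret: "<your-credential-secret>"
    region_name: "<your-region>"
```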
### Task #2: configure TF for OpenStack ###
**Goal:** instruct TF to handle a single OpenStack instance.
<a name="image-query"></a>Find out the smallest image to use for a Debian server:
``` shell
lcl$ openstack --os-cloud=engines image list --limit=20 --public --status=active --sort-column=Size -c ID -c Name -c Size --long
+--------------------------------------+-------------------------------------+-------------+
| ID | Name | Size |
+--------------------------------------+-------------------------------------+-------------+
| c6596c8a-b074-4a72-9c8c-411d1cb11113 | Debian Buster 10 (SWITCHengines) | 1537409024 |
...
```
:bulb: We use the first ID found for the placeholder `<your-image-ID>`. In
SwitchEngines this image is about 1.5 GB.
Find out the smallest instance *flavor* that accommodates our Debian image:
``` shell
lcl$ openstack --os-cloud=engines flavor list --sort-column=RAM --sort-column=Disk --min-disk=5 --min-ram=1024 --limit=1 --public
+----+----------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+----------+------+------+-----------+-------+-----------+
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
+----+----------+------+------+-----------+-------+-----------+
```
:bulb: Our flavor will be `m1.small` for the placeholder `<your-flavor>`.
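:bulb: To give an idea of where this task is heading, a minimal OpenStack plan
built on the image and flavor just found might look like the sketch below. It
is only a sketch, not official lab material: the provider version is an
assumption, and on SwitchEngines you will most likely also need a `network`
block and a security group.

``` hcl
# Sketch only -- provider version and attribute values are assumptions to verify
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.47"
    }
  }
}

provider "openstack" {
  cloud = "engines" # the cloud name defined in clouds.yaml
}

resource "openstack_compute_instance_v2" "app_server" {
  name        = "ExampleAppServerInstance"
  image_id    = "<your-image-ID>"
  flavor_name = "<your-flavor>"

  # A network block referencing one of your tenant networks is usually required:
  # network {
  #   name = "<your-network-name>"
  # }
}
```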
**@@@ RESTART FROM HERE @@@**
Create a "sandbox" directory on your local machine `~/terraform/AWS/`. Inside
it, create a file called `main.tf` (written in HCL language), the
infrastructure *definition* file, with the following content:
``` hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "<your-image-ID>"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
```
Initialize your sandbox with:
``` shell
lcl$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 3.27"...
- Installing hashicorp/aws v3.27.0...
- Installed hashicorp/aws v3.27.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository so
that Terraform can guarantee to make the same selections by default when you run
"terraform init" in the future.
Terraform has been successfully initialized!
...
```
Have a look inside the newly created sub-directory
`~/terraform/AWS/.terraform/`: you will find the required `aws` provider plugin
that was downloaded during the initialization.
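You can also query the same information via the CLI instead of poking into the
directory; both commands below are standard and shown here without their
output:

``` shell
# list the providers required by this configuration
lcl$ terraform providers
# report the Terraform and installed provider versions
lcl$ terraform version
```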
It's good practice to format and validate your configuration:
``` shell
lcl$ terraform fmt
main.tf
lcl$ terraform validate
Success! The configuration is valid.
```
### Task #3: deploy your AWS infrastructure ###
**Goal:** provision your AWS EC2 instance via TF.
Run the following command, confirm by typing "yes" and observe its output:
``` shell
lcl$ terraform apply
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_instance.app_server will be created
+ resource "aws_instance" "app_server" {
+ ami = "ami-0fa37863afb290840"
+ arn = (known after apply)
...
+ instance_type = "t2.micro"
...
+ tags = {
+ "Name" = "ExampleAppServerInstance"
}
...
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_instance.app_server: Creating...
...
aws_instance.app_server: Creation complete after 38s [id=i-0155ba9d77ee0a854]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
The information shown above before the prompt for action is the *execution
plan*: the `+` prefix marks things to be added. Of course, many details are
unknown until the corresponding resource is actually instantiated.
:question: How many resources were created?
You can verify via the AWS dashboard that one EC2 instance has been created.
The state of the resources acted upon is locally stored. Let's see what's in
our sandbox:
``` shell
lcl$ tree -a
.
├── .terraform
│ └── providers
...
├── .terraform.lock.hcl
├── main.tf
└── terraform.tfstate
```
:question: What's in file `terraform.tfstate`? The answer comes from the
following commands:
``` shell
lcl$ terraform state list
aws_instance.app_server
```
It confirms that we're tracking one AWS instance. Let's dig a bit more:
``` shell
lcl$ terraform show
# aws_instance.app_server:
resource "aws_instance" "app_server" {
ami = "ami-0fa37863afb290840"
arn = "arn:aws:ec2:us-east-1:768034348959:instance/i-0155ba9d77ee0a854"
associate_public_ip_address = true
availability_zone = "us-east-1e"
...
id = "i-0155ba9d77ee0a854"
instance_initiated_shutdown_behavior = "stop"
instance_state = "running"
instance_type = "t2.micro"
...
private_dns = "ip-172-31-94-207.ec2.internal"
private_ip = "172.31.94.207"
public_dns = "ec2-3-94-184-169.compute-1.amazonaws.com"
public_ip = "3.94.184.169"
...
tags = {
"Name" = "ExampleAppServerInstance"
}
...
vpc_security_group_ids = [
"sg-0c420780b4f729d3e",
]
...
}
```
The above command's output provides useful runtime (state) information, like
the instance's IP address. Indeed, a kind of *digital twin* of your
infrastructure is stored inside the file `terraform.tfstate`.
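If you are interested in a single resource rather than the whole state,
`terraform state show` addresses it directly; combined with `grep` it makes for
quick ad-hoc queries (output omitted here; a cleaner approach based on outputs
comes in Task #6). For example:

``` shell
# display only the recorded attributes of the app_server resource
lcl$ terraform state show aws_instance.app_server
# extract, e.g., the public IP and the runtime state
lcl$ terraform state show aws_instance.app_server | grep -E 'public_ip|instance_state'
```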
:question: Is that instance accessible via SSH? Give it a try. If not, why?
Now, stop the running instance (its ID is shown above ;-):
``` shell
lcl$ aws ec2 stop-instances --instance-ids i-0155ba9d77ee0a854
```
Wait a few seconds and test again:
``` shell
lcl$ terraform show | grep instance_state
instance_state = "running"
```
How come? We just discovered that TF only reads the locally stored state of a
resource. So, let's refresh it and check again:
``` shell
lcl$ terraform refresh
aws_instance.app_server: Refreshing state... [id=i-0155ba9d77ee0a854]
lcl$ terraform show | grep instance_state
instance_state = "stopped"
```
Ah-ha!
Hold on a second: our TF plan does not specify the desired status of a
resource. What happens if we reapply the plan? Let's try:
``` shell
lcl$ terraform apply -auto-approve
aws_instance.app_server: Refreshing state... [id=i-0155ba9d77ee0a854]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
lcl$ terraform show | grep instance_state
instance_state = "stopped"
```
:warning: Apply was run in `-auto-approve` (non-interactive) mode, which
assumes "yes" to all questions. Use with care!
From the above commands' output we see that
* the local state is refreshed before doing anything,
* no changes are applied, and
* huh?... the resource is still stopped.
Concerning the last point above, think about the basic objective of TF: as a
provisioning tool, it is concerned with the *existence* of a resource, not with
its *runtime* state. The latter is the business of configuration management
tools. :bulb: There is no way with TF to specify a resource's desired runtime
state.
### Task #4: change your infrastructure ###
**Goal:** modify the resource created before, and learn how to apply changes
to a Terraform project.
Restart your managed instance:
``` shell
lcl$ aws ec2 start-instances --instance-ids i-0155ba9d77ee0a854
```
Refresh TF's view of the world:
``` shell
lcl$ terraform refresh
aws_instance.app_server: Refreshing state... [id=i-0155ba9d77ee0a854]
lcl$ terraform show | grep instance_state
instance_state = "running"
```
Replace the resource's `ami` in `main.tf` with the second one found from the
[catalog query done above](#image-query) (or another one available with your
account). Before applying our new plan, let's see what TF thinks of it:
``` shell
lcl$ terraform plan -out=change-AMI.tfplan
aws_instance.app_server: Refreshing state... [id=i-0155ba9d77ee0a854]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# aws_instance.app_server must be replaced
-/+ resource "aws_instance" "app_server" {
~ ami = "ami-0fa37863afb290840" -> "ami-0e2512bd9da751ea8" # forces replacement
...
Plan: 1 to add, 0 to change, 1 to destroy.
Saved the plan to: change-AMI.tfplan
To perform exactly these actions, run the following command to apply:
terraform apply "change-AMI.tfplan"
```
:bulb: Remarks:
* The change we want to apply is destructive!
* We saved our plan. :question: Why? It is not really necessary in a simple
scenario like ours; however, a more complex IaC workflow might require plan
artifacts to be programmatically validated and versioned (see the example
below).
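For instance, a saved plan is a binary artifact, but it can be rendered at any
time before applying it:

``` shell
# human-readable rendering of the saved plan
lcl$ terraform show change-AMI.tfplan
# machine-readable rendering, e.g. for automated policy checks
lcl$ terraform show -json change-AMI.tfplan
```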
Apply the saved plan:
``` shell
lcl$ terraform apply change-AMI.tfplan
aws_instance.app_server: Destroying... [id=i-0155ba9d77ee0a854]
...
aws_instance.app_server: Destruction complete after 33s
aws_instance.app_server: Creating...
...
aws_instance.app_server: Creation complete after 48s [id=i-0470db35749548101]
```
:bulb: What? Not asking for confirmation? Indeed, a saved plan is intended for
automated workflows! Moreover, a saved plan will come in handy for rolling back
a broken infrastructure to the last working setup.
:question: What if we did not save our plan, and called a plain apply command?
Would the result be the same?
### Task #5: input variables ###
**Goal:** make a TF plan more flexible via input variables.
Our original plan has all its content hard-coded. Let's make it more flexible
with some input variables stored in a separate `variables.tf` file inside your
TF sandbox:
``` hcl
variable "instance_name" {
description = "Value of the Name tag for the EC2 instance"
type = string
default = "AnotherAppServerInstance"
}
```
Then modify the `main.tf` as follows:
``` hcl
resource "aws_instance" "app_server" {
ami = "ami-0e2512bd9da751ea8"
instance_type = "t2.micro"
tags = {
- Name = "ExampleAppServerInstance"
+ Name = var.instance_name
}
}
```
Apply the changes:
``` shell
lcl$ terraform apply -auto-approve
aws_instance.app_server: Refreshing state... [id=i-0470db35749548101]
...
~ update in-place
Terraform will perform the following actions:
# aws_instance.app_server will be updated in-place
~ resource "aws_instance" "app_server" {
id = "i-0470db35749548101"
~ tags = {
~ "Name" = "ExampleAppServerInstance" -> "AnotherAppServerInstance"
}
~ tags_all = {
~ "Name" = "ExampleAppServerInstance" -> "AnotherAppServerInstance"
}
...
}
Plan: 0 to add, 1 to change, 0 to destroy.
...
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```
:bulb: **Exercise:** input variables can also be passed on the `apply` command
line. Find out how to do that with a different value for the variable
`instance_name`. :question: Would this last change persist if we rerun a
plain `terraform apply`?
### Task #6: queries with outputs ###
**Goal:** use output values to query a provisioned infrastructure.
We have seen in the previous tasks that the infrastructure's status can be
displayed via `terraform show`: a rather clumsy way if you just want to
extract some specific information. A better, programmatic way of querying your
infrastructure makes use of "outputs". Put the following in a file called
`~/terraform/AWS/outputs.tf`:
``` hcl
output "instance_id" {
description = "ID of the EC2 instance"
value = aws_instance.app_server.id
}
output "instance_public_ip" {
description = "Public IP address of the EC2 instance"
value = aws_instance.app_server.public_ip
}
```
We have declared three outputs. As usual with TF, before querying their
associated values, we need to apply the changes:
``` shell
lcl$ terraform apply -auto-approve
...
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
instance_id = "i-0470db35749548101"
instance_public_ip = "34.201.252.63"
instance_tags = tomap({
"Name" = "AnotherAppServerInstance"
})
```
So, we already got the needed information, but within a workflow it is more
practical to do something like:
``` shell
lcl$ terraform output -json instance_tags
{"Name":"AnotherAppServerInstance"}
```
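For single string values, the `-raw` flag strips the JSON quoting, which is
handy for shell interpolation (Task #7 below relies on exactly this):

``` shell
lcl$ terraform output -raw instance_public_ip
34.201.252.63
```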
:question: What if the `Name` tag is changed outside TF? Try it.
:question: What must be done to have TF respect that external change?
:question: How to revert an external change via TF?
### Task #7: SSH provisioning with Cloud-Init ###
**Goal:** use Cloud-Init to provision an SSH access to your TF-managed
instance.
Did you try to SSH into the instance you created via TF? It cannot work,
because we did not instruct TF about networks, users, keys or anything
else. **This is left entirely to you as an exercise.** You need to:
1. Destroy your infrastructure. There's a special TF command for that.
1. Create an SSH key pair `tf-cloud-init`.
1. Create a new cloud-init file
`~/terraform/AWS/scripts/add-ssh.yaml` with the following content:
``` yaml
#cloud-config
#^^^^^^^^^^^^
# DO NOT TOUCH the first line!
---
groups:
- ubuntu: [root, sys]
- terraform
users:
- default
- name: terraform
gecos: terraform
primary_group: terraform
groups: users, admin
ssh_authorized_keys:
- <your-SSH-pub-key-on-one-line>
```
:warning: **Mind that the first line of this file must spell exactly
`#cloud-config`**!
1. Modify the `main.tf` as follows (a partial sketch is given after this list):
1. add a `resource` block of type `"aws_security_group"` allowing ingress
ports 22 and 80 from any address, with any egress port open;
1. add a `data` block referencing the above authorization file as a
`"template_file"` type;
1. extend the `"aws_instance"` resource to:
1. associate a public IP address,
1. link the `data` block to a user data attribute;
1. add an output `"public_ip"`.
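As announced above, here is one possible, partial shape for those `main.tf`
additions. It is only a sketch: resource names are arbitrary, the cloud-init
file path is assumed to be relative to `main.tf`, the `"public_ip"` output is
left out, and the details are yours to verify and complete.

``` hcl
# Sketch only -- verify and complete before use
data "template_file" "user_data" {
  template = file("scripts/add-ssh.yaml")
}

resource "aws_security_group" "allow_ssh_http" {
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  # repeat a similar ingress block for port 80

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # any protocol, any port
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "app_server" {
  # ... existing arguments ...
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.allow_ssh_http.id]
  user_data                   = data.template_file.user_data.rendered
}
```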
When done, *init* your new plan, *validate* and *apply* it. Verify that you
can SSH as user `terraform` (not the customary `ubuntu`) into your instance:
``` shell
lcl$ ssh terraform@$(terraform output -raw public_ip) -i ../tf-cloud-init
```