Terraform: state management for multi-tenancy

As we evaluate Terraform to replace (in part) our Ansible provisioning process for a multi-tenant SaaS, we can see the appeal of Terraform in terms of convenience, performance and reliability: infrastructure changes (adding / removing things) are handled smoothly, and the infrastructure state is tracked for us (which is very cool).

Our application is a multi-tenant SaaS for which we provision a separate instance for each customer. In Ansible we use our own dynamic inventory (much like the EC2 dynamic inventory). We have been through a lot of Terraform books and guides, and many of them say that multiple environments should be managed separately with remote state in Terraform, but they all describe static environments (like dev / staging / prod).

Is there any good practice, or a real-life example, of managing a dynamic state inventory for a multi-tenant application? We would like to track the state of each customer's set of instances and be able to apply changes to them easily.

One approach might be to create a directory per customer and place *.tf scripts inside, which would call our module kept somewhere central. The state files could be stored in S3, so we could roll out changes for each individual customer as needed.
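As a rough sketch of that idea (the module path, bucket name and variable names below are hypothetical, not an established convention), each customer directory might contain something like:

# customers/customer1/main.tf -- hypothetical sketch of the per-customer layout
terraform {
  backend "s3" {
    bucket = "example-terraform-state"               # assumed bucket name
    key    = "customers/customer1/terraform.tfstate" # one state object per customer
    region = "eu-west-1"
  }
}

module "customer_stack" {
  source        = "../../modules/saas-stack" # the shared module kept in one central place
  customer_name = "customer1"
  instance_type = "t3.medium"                # example per-customer override
}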

+5
2 answers

Terraform works at the folder level, picking up all the .tf files in the directory (and, by default, the terraform.tfvars file).

So we do something similar to Anton's answer, but get rid of some of the complexity around templating things with sed. As a basic example, your structure might look like this:

$ tree -a --dirsfirst
.
├── components
│   ├── application.tf
│   ├── common.tf
│   ├── global_component1.tf
│   └── global_component2.tf
├── modules
│   ├── module1
│   ├── module2
│   └── module3
├── production
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global_component1.tf -> ../../components/global_component1.tf
│       └── terraform.tfvars
├── staging
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global_component1.tf -> ../../components/global_component1.tf
│       ├── global_component2.tf -> ../../components/global_component2.tf
│       └── terraform.tfvars
├── apply.sh
├── destroy.sh
├── plan.sh
└── remote.sh

Here you run your plan / apply / destroy from the root level, where shell scripts handle things like cd'ing into the right directory and running terraform get -update=true, and also run terraform init for the folder so you get a unique state file key in S3, allowing you to track the state of each folder independently.
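For example (the bucket name and key scheme here are assumptions, not part of the original setup), the S3 backend can be declared once and the folder-specific state key supplied by the wrapper script at init time:

# components/common.tf (symlinked into every folder) -- partial backend configuration;
# the wrapper script passes the folder-specific key when it runs, for example:
#   terraform init -backend-config="key=production/customer1/terraform.tfstate"
terraform {
  backend "s3" {
    bucket = "example-terraform-state" # assumed bucket name
    region = "eu-west-1"               # assumed region
  }
}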

The above solution has common modules that wrap resources to provide a common interface for things (for example, our EC2 instances are tagged in a certain way depending on some input variables, and also get a private Route53 record), and then the components that are built on top of them.
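A sketch of what such a wrapper module could look like (the resource shapes and variable names are assumptions):

# modules/app_server/main.tf -- hypothetical wrapper module that enforces tagging
# and creates a private Route53 record for every instance
variable "customer" {}
variable "environment" {}
variable "ami" {}
variable "instance_type" {}
variable "zone_id" {}

resource "aws_instance" "this" {
  ami           = var.ami
  instance_type = var.instance_type

  tags = {
    Customer    = var.customer
    Environment = var.environment
  }
}

resource "aws_route53_record" "this" {
  zone_id = var.zone_id
  name    = "${var.customer}-app.${var.environment}.internal"
  type    = "A"
  ttl     = 300
  records = [aws_instance.this.private_ip]
}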

These components contain a bunch of the modules/resources that Terraform will apply together in that folder. So we might put an ELB, some application servers and a database under application.tf, and symlinking it into each location gives us a single place to manage them with Terraform. Where there are per-location differences in resources, they are split out into separate files. In the example above you can see that staging/global has a global_component2.tf that is not in production. This might be something that applies only to non-production environments, such as some network controls to prevent internet access to the environment.
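As a hypothetical illustration of such a component (module names, paths and variables are assumptions), application.tf might just compose the wrapper modules, with source paths relative to the symlinked location such as production/customer1:

# components/application.tf -- illustrative composition of wrapper modules
module "elb" {
  source      = "../../modules/elb"
  customer    = var.customer
  environment = var.environment
}

module "app_servers" {
  source        = "../../modules/app_server"
  customer      = var.customer
  environment   = var.environment
  ami           = var.ami            # variables assumed to be declared in common.tf
  instance_type = var.instance_type
  zone_id       = var.private_zone_id
}

module "database" {
  source      = "../../modules/rds"
  customer    = var.customer
  environment = var.environment
}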

The real benefit is that everything can be viewed directly in source control by developers, rather than having some templating tool that generates the Terraform code for you.

It also helps keep things DRY: the only real differences between environments live in the terraform.tfvars files in each location, and it is easy to review changes before applying them, because each folder looks very much like the others.
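For instance, two otherwise identical folders might differ only in values like these (all values are illustrative):

# production/customer1/terraform.tfvars
environment   = "production"
customer      = "customer1"
instance_type = "m5.large"

# staging/customer1/terraform.tfvars
environment   = "staging"
customer      = "customer1"
instance_type = "t3.small"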

+5

Your suggested approach sounds right to me, but there are a few more things you can consider.

Store the original Terraform templates (_template in the tree below) as a versioned artifact (a git repo, for example) and just pass in key-value properties to be able to recreate your infrastructure. That way only a very small amount of copied Terraform configuration code sits in the customer directories.

Here's what it looks like:

/tf-infra
├── _global
│   └── global
│       ├── README.md
│       ├── main.tf
│       ├── outputs.tf
│       ├── terraform.tfvars
│       └── variables.tf
└── staging
    └── eu-west-1
        ├── saas
        │   ├── _template
        │   │   └── dynamic.tf.tpl
        │   ├── customer1
        │   │   ├── auto-generated.tf
        │   │   └── terraform.tfvars
        │   ├── customer2
        │   │   ├── auto-generated.tf
        │   │   └── terraform.tfvars
        ...

Two helper scripts are needed: one to render the template from _template into a customer's directory (producing the auto-generated.tf shown above), and one to wrap the terraform commands so they run against the right customer directory and state.

Shared Terraform state files, as well as resources that should be global (the _global directory above), can be stored in S3 so that other layers can access them.
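One way other layers can consume the global layer's state is via a terraform_remote_state data source (the bucket, key and output name below are assumptions):

# in a customer or regional layer: read outputs exported by the _global layer
data "terraform_remote_state" "global" {
  backend = "s3"

  config = {
    bucket = "example-terraform-state" # assumed bucket name
    key    = "_global/global/terraform.tfstate"
    region = "eu-west-1"
  }
}

# e.g. reference a VPC id exposed as an output by the global layer:
# vpc_id = data.terraform_remote_state.global.outputs.vpc_id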

PS: I am very open to comments on the proposed solution, because it is an interesting problem to work on :)

+2

Source: https://habr.com/ru/post/1266292/

