Terraform works at the folder level, pulling in all .tf files (and, by default, the terraform.tfvars file).
So we do something similar to Anton's answer, but we remove some of the complexity around templating things out with sed. As a basic example, your structure might look like this:
```
$ tree -a --dirsfirst
.
├── components
│   ├── application.tf
│   ├── common.tf
│   ├── global_component1.tf
│   └── global_component2.tf
├── modules
│   ├── module1
│   ├── module2
│   └── module3
├── production
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global_component1.tf -> ../../components/global_component1.tf
│       └── terraform.tfvars
├── staging
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global_component1.tf -> ../../components/global_component1.tf
│       ├── global_component2.tf -> ../../components/global_component2.tf
│       └── terraform.tfvars
├── apply.sh
├── destroy.sh
├── plan.sh
└── remote.sh
```
From the root level you launch your plan / apply / destroy via shell scripts that handle things like cd'ing into the target directory and running `terraform get -update=true`. The scripts also run `terraform init` for the folder so you get a unique state file key in S3, allowing you to track the state of each folder independently.
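The answer doesn't show the script contents, but a minimal sketch of what the `plan.sh` wrapper could look like is below. The bucket name and error handling are assumptions; only the `terraform get -update=true` and per-folder `terraform init` state key come from the description above.

```shell
#!/bin/sh
# plan.sh -- hypothetical sketch of the wrapper described above.
# The STATE_BUCKET name and flag details are assumptions.

plan() {
  if [ $# -ne 1 ] || [ ! -d "$1" ]; then
    echo "usage: plan.sh <environment>/<folder>, e.g. production/customer1" >&2
    return 1
  fi

  target_dir=$1
  state_bucket=${STATE_BUCKET:-my-terraform-state}   # assumed bucket name

  cd "$target_dir" || return 1

  # Pull module updates so changes under ../../modules are picked up.
  terraform get -update=true || return 1

  # Key the remote state on the folder path (e.g.
  # production/customer1/terraform.tfstate) so each folder's state is
  # tracked independently in S3.
  terraform init \
    -backend-config="bucket=${state_bucket}" \
    -backend-config="key=${target_dir}/terraform.tfstate" || return 1

  terraform plan
}

# Run only when a folder argument is given, e.g.: ./plan.sh staging/customer1
if [ $# -ge 1 ]; then
  plan "$@"
fi
```

`apply.sh` and `destroy.sh` would presumably differ only in the final terraform command they run.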
The above solution has generic modules that wrap resources to provide a common interface (for example, our EC2 instances are tagged in a certain way depending on some input variables, and are also given a private Route53 record), and then the "implemented components".
These components contain a bunch of modules/resources that Terraform will run in a given folder. For example, we could put an ELB, some application servers and a database under application.tf, and then symlinking that file into each location gives us a single place to manage those resources with Terraform. Where locations differ in the resources they need, those differences get separated out. In the above example you can see that staging/global has a global_component2.tf that isn't in production. This might be something that applies only in non-production environments, such as some network controls to prevent Internet access to the environment.
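As an illustration of such a component, a hypothetical components/application.tf might look like the following. The module names, sources and variables here are invented for the example; the answer doesn't show the actual file. Note that because the file is symlinked into, say, production/customer1, the relative ../../modules paths resolve correctly from each location.

```hcl
# components/application.tf -- hypothetical sketch; module names,
# sources and variables are illustrative only.

module "elb" {
  source      = "../../modules/elb"
  environment = "${var.environment}"
}

module "app_servers" {
  source         = "../../modules/app_servers"
  environment    = "${var.environment}"
  instance_count = "${var.app_instance_count}"   # varies per terraform.tfvars
}

module "database" {
  source      = "../../modules/database"
  environment = "${var.environment}"
}
```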
The real benefit is that everything is easily viewable directly in source control by developers, rather than having some templating tool that generates the Terraform code you want.
It also helps keep things DRY: the only real differences between environments live in their terraform.tfvars files, and it is easy to review changes before putting them live, as each folder looks much like the others.